Fitness & Nutrition
Is your power meter really telling you what you think it is?
Scientific study reveals significant discrepancies between data output and actual power - but what does it all actually mean?
The use of power meters, both in the pro peloton and club run, has risen significantly in recent years. Nowadays some riders even have multiple power meters on different bikes, something which has led to the need to compare data from one power meter to another.
However, the accuracy of comparing power meter data has been called into question by a recent study (you can find the original study here) in which 54 different power meters were tested. The aim of the study was to see just how good modern day power meters really are, and whether they are giving us power figures that are both accurate and precise (more on that in a second).
The researchers cleverly designed their testing so they were able to calculate the true average power that a rider was producing and then compare that true power value with the value recorded on various power meters.
Power meters are now commonplace from the club run to the WorldTour peloton (Pic: Allan McKenzie/
How accurate is your power meter?
To do this, riders were placed on a treadmill set to a one per cent incline – but they were facing downhill (see figure one). A cord was attached to the seatpost, which in turn was linked to a weight to hold the rider in the same position on the treadmill.
A second weight was added to the cord which, if the rider didn’t pedal, would pull them backwards up the slope. Because the mass of the additional weight was known, the researchers could then calculate power output, while also accounting for the loss of power through the drivetrain.
This setup allowed the researchers to compare the average power as recorded on the various power meters to the true value. Each power meter was tested a number of times with a number of different riders.
Accuracy and precision
As I mentioned at the top, throughout this article we are going to look at the accuracy and precision of power meters. Therefore, before we get too much further, it's a good idea to start with a quick explanation of these two concepts.
The dartboard below can best illustrate accuracy:
All the darts are close to the centre but they are scattered around the bullseye. The darts are accurate in that none of them are far away from the bullseye, but they aren’t very precise as the darts are scattered.
Accuracy is important when it comes to power meters as it allows one person’s power data to be compared to another. It also allows those who are lucky enough to have power meters on multiple bikes to compare their output on one bike to the other.
Precision is best visualised by this second dartboard. All of the darts are very close to one another (precise location) but they are nowhere near the bullseye (not very accurate).
Precision is important as it allows data from one day or one effort to be compared to another. Precision is arguably more important when it comes to power meters as without it they are just generating random numbers.
Finally, if we combine both accuracy and precision we get this:
All of the darts are close to one another and in the bullseye. This is the ideal world when it comes to power meters as it means the data can be compared across multiple power meters on multiple days.
Average power comparison and accuracy
Normally when we compare power outputs, it is over a certain period of time – maybe the length of a climb, a TT or even an interval in training. This is exactly what happened in the research. The average power for a certain period of time was compared with the true power value.
The accuracy of a power meter gives us an understanding of how close it is measuring to the true power value. In our dartboard example this is how close each dart is to the bullseye. In power terms, essentially we are asking if the 300w being measured on a power meter is the same as a true 300w.
We can see the results in the chart below.
What we need to look at in the chart is the mean deviation (%), which represents the average accuracy across all power meters from the specific brand. Essentially it is a comparison of the average position of the darts to the bullseye across all tests of all models of one brand. Let's use SRM as an example, which shows values of -0.5% ±2.4%. This means that across the test, SRM power meters overall read 0.5% low – however, there was a range of accuracies; if we look at how much each individual SRM power meter differed from that average (the standard deviation, for those who remember GCSE maths!), we get a figure of ±2.4%.
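To make those two numbers concrete, here is a minimal Python sketch of how a brand's mean deviation and spread would be calculated (the measured values are invented purely for illustration):

    from statistics import mean, stdev

    true_power = 300.0  # watts - the "bullseye"
    # Hypothetical average power recorded by five units of one brand (illustrative only)
    measured = [297.5, 305.0, 294.0, 301.5, 298.0]

    # Percentage deviation of each unit from the true value
    deviations = [100 * (m - true_power) / true_power for m in measured]

    print(f"mean deviation: {mean(deviations):+.1f}%")  # how far the brand reads high or low on average
    print(f"spread (SD): +/-{stdev(deviations):.1f}%")  # unit-to-unit variation around that average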
We're quite used to seeing power meter manufacturers stating their claimed accuracy – for example, Stages power meters are advertised as ±2%. However, what we can see from the study is that most power meters actually fell outside of their advertised level of accuracy in terms of mean deviation, given here in brackets: Stages (±2%), Power2max (±2%), Polar (±2.5%), Verve (±0.37%) and Rotor (±1%). Only SRM (±1%), Powertap (±1.5%), Quarq (±1.5%) and Garmin (±2%) were within their stated level of accuracy.
I should note here that the Stages discrepancy could be explained by the fact that it only measures power on one side. Therefore, if the riders who tested the Stages had a right-leg bias, this may explain why the Stages under-reads slightly.
However, these results alone don’t quite tell us the full story.
When we look at the overall accuracy of all units of a single brand, we get a distorted figure. For example, if two units were tested and one read eight per cent too high and the other eight per cent too low then overall that brand would have a mean deviation of 0%. It would appear the brand in question has perfect accuracy but in reality each individual unit is terribly inaccurate.
For each brand of power meter in the test, apart from the Garmin, Polar and Rotor models, multiple units were tested multiple times, so this allows us to see how accurate each individual power meter is.
From the graph above we can see the accuracy (mean deviation %) of each individual power meter compared with the true power value.
The more grouped the dots, the less variation in accuracy between units. We can see that for the SRM power meters the spread of data is +4 to -4%. This means if you recorded the same session on two power meters, one may measure 8% higher than the other.
The worst performer in the test was the Stages power meter, where we can see an 11% spread in power figures, so if you recorded the same 300w interval on two Stages power meters, one could read as high as 309w and the other as low as 273w – a huge variation in power terms.
The Powertap and the Verve power meters came out of this test well and so, based on this study alone, are a good bet if you want to have multiple power meters on a number of bikes. However, only three Verve power meters were tested, compared with 12 SRMs for example, so they may have just been particularly good examples.
Mean deviation also gives us a fresh perspective on the marketing claims of power meter brands. Let’s use SRM as an example (and that is only one example from a whole range of brands) and a claimed accuracy of ±1%. Most customers may think this means each and every power meter is ±1% accurate, but it is more likely this relates to the overall average.
Therefore, a customer may be buying a power meter thinking it is ±1% accurate, when in reality it is actually within a range of approximately ±4% based on the data from this research. However, if they bought ten power meters, the average accuracy across them all would be -0.5%.
Despite all that, the precision of a power meter is arguably of much greater importance than how accurate it is.
In this research, the coefficient of variance can be considered a good measure of precision. Basically what this is measuring in our dartboard example is how far the darts are from one another.
Precision essentially means that the numbers are reliable test after test. So the 300w your power meter is measuring one day is exactly the same as the 300w the next day. To be able to track your performance and train effectively, precision is more important than accuracy.
Power meter precision is arguably more important than accuracy (Pic: Factory Media)
When considering the coefficient of variance numbers, you need to keep in mind that a lower number is better. For example, overall across the three Verve power meters tested the coefficient of variance was only 0.6%. On average, across those three Verve power meters, if you measure 300w on one day, the most you could expect it to differ by on another day is 1.8w for the same effort.
The only unit that comes out badly in terms of average precision is the Stages power meter, which had a coefficient of variance of two per cent. Here, 300w might be 300w on one day and 294w or 306w the next day.
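The same back-of-the-envelope arithmetic applies to precision. Here is a rough sketch (again with invented repeat-test numbers) of how a coefficient of variance translates into watts:

    from statistics import mean, stdev

    # Hypothetical repeat tests of one power meter at a true 300w effort (illustrative only)
    repeats = [299.0, 301.5, 298.5, 300.5, 302.0]

    cov = 100 * stdev(repeats) / mean(repeats)  # coefficient of variance, in %
    print(f"coefficient of variance: {cov:.1f}%")
    print(f"at 300w that is roughly +/-{300 * cov / 100:.1f}w from day to day")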
Now just as with the average accuracy, the average precision doesn’t tell the entire story. We can also look at the precision of the individual power meters across multiple tests (figure three).
This graph shows some interesting results. To interpret the results, if the dots are closer to the x axis then the precision of that unit is good, so the higher the dot the poorer the precision.
A number of things jump out from this.
Firstly, the SRM, Powertap, Rotor, Quarq and Verve power meters all display good precision and, with each of these, the values are under 1.5% – on any given day 300w may range from 295.5 to 304.5w. That’s not bad at all and probably less than the daily variation in your form.
However, we should also look at where those dots fall. With SRM, Powertap and Verve the dots take a pyramid shape, so most power meters are in the 0-1% range rather than the 1-2% range. So, if you were to buy a power meter from these brands, you are more likely to get a power meter with a coefficient of variance of less than one per cent, as opposed to one with between one and two per cent variance.
The worst performer on the test was the Stages power meter. One unit showed a coefficient of variance of 6% – that means your 300w could be as much as 282-318w, which is well outside of what is useful when it comes to trying to ride to power or monitor your performance. What is also strange about the Stages power meters is that while some units were very precise, others were very imprecise.
Let me put this into context. With the least accurate power meter with the worst precision, your true 300w might be measured as anywhere from 256-298w on any given day. On the other hand, with the most accurate and precise power meter a true 300w could be measured as 297-300w.
Second-to-second comparison
So far we have looked at the average power over a longer period of time and how accurate and precise that measurement is. However when we ride we don’t maintain a perfectly smooth power output. Instead, power output on the road is what we call ‘stochastic’ – it varies greatly from second to second.
To put this into context, a 300w average power output for 10 seconds may look something like this on a second-to-second basis:
300w, 305w, 295w, 298w, 307w, 295w, 302w, 300w, 294w, 304w
What we have to consider is that the respective precision and accuracy of a power meter affects the recorded power numbers on a measurement-by-measurement basis. So for the same 300w power output over 10 seconds the recorded power might look something like this:
307w, 298w, 304w, 293w, 305w, 291w, 295w, 306w, 311w, 290w
Overall the average power for the 10 seconds is identical, but if we were looking for a peak one-second power within the effort we would get big differences between the true value and the measured value.
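A quick check with the two example sequences above shows why the averages can match while individual seconds diverge:

    true_readings     = [300, 305, 295, 298, 307, 295, 302, 300, 294, 304]
    measured_readings = [307, 298, 304, 293, 305, 291, 295, 306, 311, 290]

    print(sum(true_readings) / 10, sum(measured_readings) / 10)  # 300.0 and 300.0 - identical averages
    print(max(true_readings), max(measured_readings))            # peak one-second power: 307w vs 311w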
I have personally done some research on this to follow up on the study. I had a coaching client ride on a laboratory-quality stationary trainer, a Cyclus 2, which replaces the rear wheel much like a Wahoo Kickr or Tacx Neo smart trainer. The Cyclus 2 is a highly accurate piece of kit calibrated to laboratory standards for use in scientific research.
We recorded power output on both the laboratory trainer and the rider's power meter for three-minute and 12-minute efforts, before comparing both the differences in average power and second-by-second power output. The results showed that while the average power was relatively consistent – two different power meters consistently read 14w and 19w below the laboratory trainer – there were massive differences in the second-to-second power, ranging anywhere from +60w to -48w.
This is really important when it comes to looking for max power, so the point to take away from this is that, in my experience as a coach, a power meter only becomes useful for periods of five seconds or more. Any shorter and there is too much scope for bad data to creep in and skew the power scores.
SRM performed well in the study (Pic: Factory Media)
What have we learnt from this?
The quick conclusion to draw is that your power meter might not be telling you exactly what you think it is. Indeed, the 300w you are seeing on your computer might not be quite the same as the 300w your ride partner is apparently producing. On top of this, with some power meters it appears that 300w today might not even be the same as 300w tomorrow.
In short, some power meters are more accurate and precise than others. When choosing a power meter, I would urge you to look at the data presented in this article about the accuracy and precision of various power meters, as well as considering other factors that may affect your purchasing decision, like budget, bike compatibility and aesthetics. At the end of the day your power meter is a training aid; however, to be useful as one it needs to be providing you with, at the very least, precise data.
One final thing to mention is that all power meters in this study were calibrated via a zero offset before each test (where possible), so the levels of accuracy and precision we see here represent the best performance you are going to get from a power meter. It is likely that once you add in real world conditions, the levels of error will increase.
Therefore, to maximise the accuracy and precision of any power meter, make sure you look after it. That means calibrating it where possible and making sure it is well charged before each ride. Another tip is to let it acclimatise to the outside temperature before starting your training. Finally, a great pro tip and something I get the riders I coach to do is to recalibrate your power meter before the first effort of the day. That means you know you are getting the most accurate data you can, when it really matters.
|
Convert decimal 71 to percent
Convert from percent to decimal. Here is the answer to the question: how do you convert 71% into its decimal equivalent? Use the percent to decimal calculator below to write any percent as a decimal.
To change to 'Decimal to Percent', please click here!
Percent to Decimal Calculator
How to convert from percent to decimal
Let's see this example:
Percent means 'per 100'. So, 71% means 71 per 100 or simply 71/100.
• If you divide 71 by 100, you get 0.71 (a decimal number).
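The same rule as a quick Python check (purely illustrative):

    percent = 71
    decimal = percent / 100  # 'percent' means 'per 100'
    print(decimal)           # 0.71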
See also:
Percent to Decimal Calculator
Sample Percent to Decimal Calculations
|
Section 108: Copyright Exceptions for Libraries and Archives
Title 17, section 108 of the U.S. Code permits libraries and archives to use copyrighted material in specific ways without permission from the copyright holder. This does not replace fair use, which is codified in section 107. Librarians, archivists, and library users can rely on fair use just like everyone else. In fact, in many cases fair use may apply when section 108 does not. Section 108 permits libraries and archives to:
• Make one copy of an item held by a library for interlibrary loan;
• Make up to three copies of a damaged, deteriorated, lost, or stolen work for the purpose of replacement. This only applies if a replacement copy is not available at a fair price;
• Make up to three copies of an unpublished work held by the library for the purpose of preservation. If the copy is digital, it cannot be circulated outside the library;
• Reproduce, distribute, display, or perform a published work that is in its last 20 years of copyright for the purposes of preservation, research, or scholarship if the work is not available at a fair price or subject to commercial exploitation;
• Make one copy of an entire work for a user or library who requests it if the work isn't available at a fair price.
The following restrictions must be observed when appealing to this exception:
• It applies only to libraries and archives open to the public, or to unaffiliated researchers in a specialized field (OSU Libraries meets this exception);
• Copies cannot be made for commercial purposes;
• The copying cannot be systematic (e.g., to replace subscriptions);
• All copies made under this exception must include a notice stating that the materials may be protected under copyright.
The current version of section 108 was updated with the passage of the Digital Millennium Copyright Act in 1998.
Section 108 Spinner
The Section 108 Spinner from the ALA Office of Information Technology Policy provides a novel way to review the exceptions for your particular situation.
FAQs about Libraries & Copyright
Q. How does copyright relate to digitizing content in libraries?
Q. How does copyright apply to interlibrary loan?
Q. How does copyright apply to course reserves?
Q. How does copyright relate to digitizing content in libraries?
A. You may occasionally encounter the idea that libraries should "just digitize everything in the library and put it online!" Alas, this would constitute infringement if the works are protected by copyright and permission was not obtained. Materials in the public domain may be digitized without permission or restrictions. Two recent landmark cases related to mass digitization are Authors Guild v. HathiTrust and Authors Guild v. Google. While the defendants (who digitized copyrighted works) did emerge victorious, it's important to note that they weren't making entire copyrighted works freely available for anyone to read online. Their uses (creating a searchable database; providing access to the print-disabled) were held to be transformative by the courts, an important factor in fair use considerations. Libraries may be able to digitize portions of their collections for specific purposes based on a fair use analysis.
Q. How does copyright apply to interlibrary loan?
A. Interlibrary loan is permitted under section 108 of the Copyright Act. Both the lending and the borrowing libraries have specific responsibilities they must fulfill. The Copyright Act lacks guidance on, for example, how many articles a library may request from one journal in one year. Copyright holders and libraries tried to agree on these details when drafting the CONTU Guidelines. Full agreement was never reached, and the CONTU guidelines don't carry the force of law.
Q. How does copyright apply to course reserves?
A. Lending of physical books held by the library is permitted under the first sale doctrine. In other instances, such as making copies of articles and checking them out to students, libraries may rely on fair use to justify course reserves. A recent landmark case related to electronic reserves is Cambridge v. Patton, in which a group of publishers sued Georgia State University for their liberal e-reserves policy. The courts held GSU to be the prevailing party, finding fair use in the majority of alleged infringements.
Q. What if an individual or library requests an entire work for private study or scholarship?
A. It may be appropriate to copy an entire work when requested for private study or scholarship (if it is out of print or can't be obtained at a fair price). Use this checklist from the Copyright Advisory Office at Columbia to help you determine if a request meets all the requirements.
Q. Is the library responsible for patrons making photocopies in the library? Are librarians the "copyright police"?
A. Libraries and their employees are not liable for users making copies in excess of fair use as long as the library displays a notice warning users that content may be protected by copyright. The text of the copyright notice can be found on p. 20 of Circular 21 from the Copyright Office.
Q. How does copyright apply to library lending? What is the "first sale doctrine" and how does it apply to libraries? Why are the rules for lending e-books different than print books? How does copyright relate to used book sales?
A. The first sale doctrine (section 109(a) of the Copyright Act) allows owners of a legal copy of a tangible (physical) work to resell, rent, lend, or give away that copy without the copyright owner's permission. This explicitly permits libraries to lend books from their collections. It also allows owners of a physical book to resell that book, creating the used book market.
Purchasers of e-books do not have the same unrestricted rights as purchasers of physical books. The first sale doctrine came into effect before the digital age. Congress, courts, and the Copyright Office have been hesitant to extend the first sale doctrine to digital content. Additionally, libraries often license rather than purchase e-books, which is another reason first sale doctrine would not allow them the same freedom as lending physical books.
|
Saturday, October 18, 2014
"Anyone who tells a lie has not a pure heart and cannot make good soup." ~ Ludwig van Beethoven
He was born in the city of Bonn, in the Electorate of Cologne, in what would later become part of Germany, on December 15, 1770, or perhaps December 16 of that year. His happy father had the boy's birth registered at the town hall, as was customary at the time, early on 16 December, so scholars differ on the date of his birth. As mine is December 15, I choose to think that his was as well. His father, a musician of some renown, known more for his drinking than his singing, was determined that young Ludwig would be another Mozart. Actually, the world really didn't need the one it already had, with the exception of Mozart's "Mass in C minor" and "Don Giovanni", the last two pieces Wolfgang wrote that are truly worth hearing; the rest is just the same piece written 600 times. But, that's a story for someone who actually cares enough to write anything more about Mozart, so we're safely done with him.
"I shall seize fate by the throat." ~Ludwig van Beethoven
Young Ludwig didn't fulfill his father's wishes of becoming the “next Mozart” nor even the “next Haydn”, which is really okay. Young Ludwig, when not busy fighting off the night terrors that his drunk father visited upon him, if he caught Ludwig trying to play piano after bedtime, took up the viola at a rather young age and played in both of the orchestras in Bonn. Like every viola player everywhere, while he was sawing his way through some boring part, written by you-know-who, he must have been thinking to himself, “Mein Gott! We must have better viola parts around here! These are terrible!” Just kidding. But, the music of the time had already seen the peak of the Classical era and the viola parts had always been awful. Haydn, who danced a merry tune to his patrons managed to write 104 symphonies, while Mozart wrote 41. Other composers wrote just as many and have been lost to history. If they're anything like any piece of music written by Louis Spohr, this is a good thing. Spohr is a bore; bland found a home when Spohr was writing music in the 19th century.
Beethoven's viola. Vienna, Austria
Ludwig was an entirely different matter. Here, for the first time in a long time, well, really in forever, a composer had arrived in Vienna (the happening place for Classical music; very cutting-edge back then) and proceeded to turn the place on its ear. No longer would the composer bow down to the whims and desires of the nobility. When Ludwig played piano, he was the prince and expected to be treated as such. Salon evenings would turn into competitions, with the young lion raging up and down the keyboard furiously and with a technical prowess that none had seen before. He was also lighting up the musical world with his compositions.
"To play without passion is inexcusable." ~Ludwig van Beethoven
His writing career is traditionally broken out into three periods, although that is a simplification; his first period is considered as occurring during the last years he spent in Bonn and his first years in Vienna. They mark the time when his first piano trios and his first two symphonies were written. This music is still very much in the Classical mold, although there are signs beginning with his 1st symphony that something different is going on in his head. The first movement opens with a forte and immediately drops to a piano, unheard of at the time. This is a reaction to the "stepped" way dynamics were approached previously. It's a small distinction, but a telling one. His 2nd symphony, like his 1st, is a charming work; almost too airy for Beethoven. This was all about to change in 1803.
"I am a rock-and-roll violist. I kick ass." ~ViolaFury
Firstly, he declared to a colleague that he was unhappy with the way his writing was going; he didn't think that he was achieving the clarity and force of spirit that he was looking for. He didn't want to be thought of as just another salon artist, or have his art be trivialized. It was a higher calling to him and he wanted to bring to it the proper attention and sought to honor his own muse and he was passionate about it.
Secondly, he started the rough draft for his 3rd symphony, and was going to dedicate it to Napoleon; it would be the "Bonaparte" symphony. Then, Napoleon got the bright idea of conquering the world, and Beethoven was furious. He said "So he is no more than a common mortal! Now, too, he will tread under foot all the rights of man, indulge only his ambition; now he will think himself superior to all men; become a tyrant!" Ludwig angrily scratched out the dedication page and renamed the mighty 3rd the "Eroica" or "Heroic" symphony in E flat major. And it is truly a magnificent work! I've known it since I was a kid and I've played it several times.
In the second movement, Beethoven completely shreds what remained of the Classical era and goes on a towering rampage of fury. The movement starts off sounding like a funeral dirge and it is dark indeed. He approaches the development with trepidation and just when you think he is going to hesitate and return to the main theme, he cuts loose with sixteen measures of unmitigated rage. It is almost Mahlerian in its complexity and breadth. Spent, he then returns to the quiet, and ends with a syncopated, almost jazzy little fillip that ends the movement; it's almost as if he's saying "there, I got my musicrage out and I'm good, now, Nap". But, not to be flippant and undermine the importance of this symphony and this movement and those particular 16 measures; it changed the musical world. We went from the Classical era to the Romantic era in that small amount of time.
"Music is a higher revelations than all wisdom and philosophy." ~Ludwig van Beethoven
It was also the end of Beethoven's early period and as he moved into his middle period, he would see some of his most productive and audacious work written and performed. He wrote string quartets, including the famous opus 18, which I love to play. Beethoven's central key is C minor, which puts him in the relative key of E flat major. Being a viola player, this is a natural state for us, as it encompasses our lowest register. I don't know if he thought along those lines, as he once told a violinist "What do I care about your damned fiddle, when the Spirit seizes me!?" So, there's no real indication that he favored violas, although playing anything written during and after Beethoven's lifetime is infinitely better for violas.
"Don't practice only your art, but force your way into it's secrets, for it and knowledge can raise men to the divine." ~Ludwig van Beethoven
But spirit and muse were all with him; when we think of the terrifying 5th symphony, it really beggars belief to think that a composer would so audaciously build an entire symphony around four notes: Da-Da-Da-Dum. These notes are repeated throughout the entire work, not just the first movement. There are a few things about this symphony that once again, set it apart from so many other works, then and now. The constant interweaving of the thematic material between all sections has to flow like electricity and the entire work is in constant flux. The other thing that I find remarkable and I've played too many symphonies to count, is that the only other symphony that I've ever played that has a bridge (meaning no pause or break) between the third and fourth movements is Sibelius' 2nd Symphony and that is just as brilliant and astounding as it is with Beethoven.
"The goosebumps start at 4:10." ~ViolaFury
Beethoven was not an easy person to like or get to know. Like many artists and composers, he lived inside his head, but he had an additional reason for doing so; he began to go deaf at the age of 23, and by age 30, was profoundly deaf. He thought nothing of standing up in a pub and yelling “So and So is a Donkey's Ass!” and he was irascible and often seemed unkind. But, through his music; through the splendor of his “Missa Solemnis” and his 9th Symphony, with the most-cherished theme of all time, the spectacular “Ode to Joy” you know that Beethoven understood the human condition and that he tried his best to express that greatness and the humanity and heart that lie within us. There's a very good reason he is my muse and always has been, since, like age 4. He's always been a part of my life and he expresses the greatness I would love to be able to say I've tried to achieve as a human being.
For those of you who may not know, I am under treatment for essential tremor, and have been for the last year. It is inherited and my mother had it. It prevented me from playing viola for several years and I was symptomatic as long as twenty years prior, with worsening symptoms over the last five years; diagnosing was difficult and arduous, but my neat-o, keen-o neurologist figured it out. The Parkinson's Disease Foundation is paying for my treatment.
However, being a Wallace, and more prone to kick ass, take names and make retribution four-fold when at the top of my game, I am not one to let this sort of thing stop me. Seeing as I can't really take vengeance out on a condition, or a disease, I chose the next best thing: I undertook an audition for the Tampa Bay Symphony and am playing viola again, beginning this season. SQUEE! Our first concert includes Beethoven's 5th Symphony, which is an amazing work. We will be playing Elgar and Shostakovich later on and I am so very excited and proud to be a part of this excellent group.
To put this into a better historical context, I am playing on my wonderful Italian viola that was built only ten years after Beethoven's death in 1827! I'll be writing on the other concerts that we will be performing as the time comes closer and I'll put up links as they will be broadcast, as well. I'm also taking 2 programming classes through the good ol' University of Michigan, and this is a lot of fun as well, and I am looking forward to #NaNoWriMo, where we will continue "Music of the Spheres, Again." No, really, that's the title of the sequel. If they can get away with that in "Sharknado" I figure I can pull it off here. Happy #ROWing!
|
Sentence Examples
• If you are unable to write, you may dictate the test to the professor.
• Hitler is an example of someone who used his power to dictate to a group of people.
• The ruler's new dictate required all citizens to have health insurance through one organization.
• It was hard to stay awake as I listened to my boss dictate a long message.
• As president, I would rather take a vote from the members of the club than dictate a decision for everyone.
What's another word for dictate?
|
Signature Authentication
While public key encryption is a method of encrypting data, signature authentication or public key authentication is an alternative method of identifying yourself to a login server, instead of typing a password. Under most circumstances, it is considered considerably more secure and at the same time more flexible than conventional password authentication.
When using password authentication, you prove you are who you claim to be by proving that you know the correct password. The only way to prove you know the password is to tell the server what you think the password is. This means that if the server has been hacked or your connection to the server is being spoofed, an attacker can learn your password.
Public key authentication solves this problem by using public-key cryptography not for encryption of data, but merely for creating and verifying digital signatures.
You generate a key pair, consisting of a public key (which everybody is allowed to know) and a private key (which you keep secret and do not give to anybody). The private key is able to generate signatures. A signature created with your private key (virtually) cannot be forged using some other key; but anybody who has your public key can verify that a particular signature is genuine.
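As a concrete illustration of that sign/verify relationship - not the exact mechanism PuTTY or OpenSSH uses internally, just a minimal sketch using the Python 'cryptography' package and an Ed25519 key:

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    private_key = Ed25519PrivateKey.generate()  # kept secret
    public_key = private_key.public_key()       # may be given to anyone

    challenge = b"prove who you are"            # e.g. data the server asks you to sign
    signature = private_key.sign(challenge)     # only the private key can produce this

    try:
        public_key.verify(signature, challenge)  # anyone with the public key can check it
        print("signature is genuine")
    except InvalidSignature:
        print("forged or corrupted signature")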
With the program of your choice, you generate a key pair on your own computer (which should not already be hacked…), and copy the public key to the server, in our case running OpenWrt. Then, when the server asks you to prove who you are, PuTTY can generate a signature using your private key. The server can verify that signature (since it has your public key) and allow you to log in.
Now in case the server is hacked or the connection is being spoofed, the attacker does not gain your private key nor your password, but merely one signature. And signatures cannot be re-used, so they have gained nothing.
NOTE: SSH already makes use of public key authentication, but only to authenticate the server. If the server itself has been hacked, that authentication would still succeed.
Implied Weaknesses
• Public Key Authentication can be brute-forced. The expenditure to do this with modern technology is so high that it can be considered secure for most purposes. This may change in the future, either due to increases in raw computing power or due to a mathematical breakthrough.
• A private key is usually 2048 bits (or more) in size, so (almost) no one can memorize it. Therefore, it has to be stored. Access to the storage device must be controlled or the key must be encrypted.
• The client system from which a login is performed has to be pristine.
• 3rd party authentication of public keys is not an issue when you personally put public keys on the server or give them to other people. However, if the server has been compromised, the perpetrator can replace the public key (besides doing many other things).
User negligence
Security never relies on some algorithm alone, but on the user comprehending the principles and acting sanely. Usually there is a chain of security and every link must be kept secure on its own for the whole concept to work. Therefore every encryption method has some inherent weaknesses which, only once they are accounted for, allow the method to be considered reasonably secure. But then there is the user…
• If success of brute-force attacks is reported as high, do not use this encryption method any longer :-P
• Pristine Client: You should not store your private key on a memory stick and then attempt a login from a host in an Internet café or from a friend's computer, since the host could be hacked. But if at least the BIOS of this host is pristine, you could very well boot from a Linux DVD, or from your USB stick. That way you'd have a clean OS in which you could use your private key to encrypt data or generate signatures.
• Access to Private Key. Your private key is nothing but a huge number. Nobody can know it! The private key is going to be stored in a digital form on some storage device, like your hard disc or some memory stick. And that way, anybody who gains access to that is able to copy your key. For this reason, a private key is ALWAYS encrypted when it is stored, using a passphrase (now this has to be a very long password) of your choice. In order to generate a signature, your client (e.g. PuTTY) must decrypt the key, so you have to type your passphrase.
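A minimal sketch of what "stored encrypted under a passphrase" looks like, again with the Python 'cryptography' package (your SSH client does something equivalent when it writes a protected key file):

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.serialization import (
        Encoding, PrivateFormat, BestAvailableEncryption,
    )

    private_key = Ed25519PrivateKey.generate()

    # The key only ever reaches the disk in encrypted form; the passphrase never does.
    pem = private_key.private_bytes(
        encoding=Encoding.PEM,
        format=PrivateFormat.PKCS8,
        encryption_algorithm=BestAvailableEncryption(b"a very long passphrase"),
    )
    print(pem.splitlines()[0].decode())  # -----BEGIN ENCRYPTED PRIVATE KEY-----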
The passphrase can make public-key authentication less convenient than password authentication because the passphrase is much longer than a password. Every time you log in to the server, instead of typing a short password, you have to type a long passphrase. One solution to this is to use an authentication agent, a separate program which holds decrypted private keys and generates signatures on request. PuTTY's authentication agent is called Pageant. When you begin a Windows session, you start Pageant and load your private key into it (typing your passphrase once). For the rest of your session, you can start PuTTY any number of times and Pageant will automatically generate signatures without you having to do anything. When you close your Windows session, Pageant shuts down, without ever having stored your decrypted private key on disk. Many people feel this is a good compromise between security and convenience.
Here is a German wiki about security:
Assuming you have a key in ~/.ssh/ on your host computer, you can copy it to the OpenWrt system in just one command:
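The exact command from the original page is not preserved here, but one common way to do it (an assumption on my part: a default OpenWrt install running Dropbear, reachable as root at 192.168.1.1, with a key pair named id_rsa) is:

    cat ~/.ssh/id_rsa.pub | ssh root@192.168.1.1 "cat >> /etc/dropbear/authorized_keys"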
Thereafter, you can log into the OpenWrt system from your host computer without the need for a password.
|
Driving Barefoot
Though it is a popular belief, driving barefoot is not actually illegal. It’s perfectly legal to drive a car barefoot in all 50 States, Canada, and the UK. Some consider driving barefoot to be safer than driving with shoes, especially compared to flip-flops or high heels.
According to the Missouri State Highway Patrol:
According to the State of Michigan:
Some states have issued warnings about driving with such inappropriate footwear:
Drivers should wear safe footwear that does not have an open heel such as flip-flops or sandals because these types of shoes can slip off and wedge under accelerator or brake pedals. High-heeled shoes can also be problematic because heels can get caught in or under floor mats and delay accelerating or braking when needed. Driving in bare feet, socks, or stockings can also be dangerous causing your feet to slip off the gas or brake pedals.3
Simulator studies have also found that flip-flops made overall deceleration up to 0.13 seconds slower – the equivalent of travelling a further 3.5m at 60mph.4
Jason R. Heimbaugh has contacted all 50 States and the District of Columbia. You can read the responses at Barefooters.org
1. http://www.mshp.dps.mo.gov/MSHPWeb/Root/March2013FeaturedStatutes.html
2. http://www.michigan.gov/documents/msp/TSS_Field_Update_16_172717_7.pdf
3. http://www.dmv.virginia.gov/general/#news/news.asp?id=7281
4. http://www.dailymail.co.uk/news/article-2396615/Flip-flops-dangerous-drive-heels-10-near-miss-wearing-them.html
|
Q: Who should get flu vaccine?
Q: What are the biggest misconceptions about the flu vaccine?
Q: How long is someone with the flu contagious?
A: The interesting thing about the flu, according to the CDC, is that infected people can pass the flu to others before they even realize they are sick. Most healthy adults may be able to infect others one day prior to developing symptoms and continue to do so five to seven days after becoming sick. Young children and people with weakened immune systems may infect others for longer periods.
Q: Why is it important for children to be vaccinated?
A: Children younger than 2 are susceptible to severe respiratory infections such as pneumonia and bronchitis if they get the flu, said Dr. Tina Tan, infectious disease specialist at Children's Memorial Hospital. She added that children tend to shed the influenza virus quickly, in high amounts and for a long period, a particularly hazardous dynamic in closed daycare or school settings where the virus can disperse easily.
"Many epidemics start from a school-aged child that gets influenza, he then spreads it to everybody in his classroom or his school, and then everybody brings it home to their families and they spread it among their families," Tan said.
Q: What happened to the H1N1 virus that resulted in the 2009 pandemic?
A: H1N1 is still around and is one of the three strains of influenza virus that medical experts have determined will be most prevalent this season. The H1N1 virus, which was first detected in the United States in 2009, was a unique combination of influenza virus genes never previously identified in animals or people, according to the CDC. There was a rush to develop a vaccine for the new strain, but production delays caused a shortage of vaccine across the country.
"H1N1 has been included in the vaccine, so getting the influenza vaccine does protect you from that virus and other related viruses," said Gerber. "Right now, we do not expect any shortages of vaccine."
|
Heavy Coffee Consumption Could Lead To Vision Loss, Study Suggests
04/10/2012 16:49
According to new research, heavy caffeinated coffee consumption could be associated with an increased risk of exfoliation glaucoma, which can lead to vision loss.
The study is the first to examine the link between caffeinated coffee and the eye condition among the US population.
According to a statement, participants in the study who drank three cups or more of caffeinated coffee daily were at an increased risk of developing exfoliation glaucoma or becoming a glaucoma suspect.
However, researchers did not find associations with consumption of other caffeinated products, such as soda, tea, chocolate or decaffeinated coffee.
"Because this is the first study to evaluate the association between caffeinated coffee and exfoliation glaucoma in a US population, confirmation of these results in other populations would be needed to lend more credence to the possibility that caffeinated coffee might be a modifiable risk factor for glaucoma," said Kang.
The research is published in Investigative Ophthalmology & Visual Science.
|
Introduction: Mini Security System Project
How does this mini security project work?
Firstly, the Sharp 2Y0A21 sensor will detect the distance between the sensor and the object (in our case, a human) and the data (distance and ADC value) will be shown on the touch screen in radial gauge form. As the distance goes up, the ADC value goes down and vice versa, as if they are in a push-pull relationship.
Secondly, if the distance between the sensor and the user is too close, the buzzer will buzz and the red LED will light up. This will usually scare the user (e.g. a burglar) away.
Lastly, we included an NFC reader so that the house owner can tap a card to open the door and stop the buzzer, and the LED will turn green.
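The decision logic itself is simple. Here is a minimal, hardware-free Python sketch of the behaviour described above (the 30 cm threshold and the function name are my own inventions for illustration; the actual project code is in the attached Arduino and Visual Studio files):

    ALARM_DISTANCE_CM = 30  # hypothetical "too close" threshold

    def security_state(distance_cm, nfc_card_authorised):
        """Return what the buzzer and LED should do for one sensor reading."""
        if nfc_card_authorised:
            # Owner tapped a valid card: silence the buzzer, show green.
            return {"buzzer": False, "led": "green"}
        if distance_cm < ALARM_DISTANCE_CM:
            # Someone is too close: sound the buzzer, show red.
            return {"buzzer": True, "led": "red"}
        return {"buzzer": False, "led": "off"}

    print(security_state(120, False))  # {'buzzer': False, 'led': 'off'}
    print(security_state(15, False))   # {'buzzer': True, 'led': 'red'}
    print(security_state(15, True))    # {'buzzer': False, 'led': 'green'}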
Step 1: Coding Platforms You Need + Background Info You Need
You will need to have Arduino Software (IDE) which you can download here --->
You will also need Microsoft Visual Studio 2015 Community Update 3. You can download here --->
You need to know all about breadboards
Step 2: Gather the Things That You Need.
You will need:
1. 7” Raspberry Pi Touch Screen (optional)
2. Jumper Wires: Male to male for arduino to sensor. Male to female for raspberry pi to arduino.
3. Raspberry Pi 3 Model B (optional)
4. Arduino Uno
5. Resistor
1. NFC
2. LED light
3. Buzzer
4. Sharp 2Y0A21 sensor (To measure Distance and ADC)
The Raspberry Pi and touchscreen are only used to display our app built with the Universal Windows Platform in Visual Studio. The app, as I have said earlier, consists of 2 radial gauges. The 1st radial gauge displays the distance and the 2nd radial gauge displays the ADC value.
Step 3: ALL the Codes You Need
I have attached the files for the Arduino code + Visual Studio code. There are many ways to code this, so feel free to edit them as they are not perfect. I assume that you know how to upload the Arduino code to the Arduino and deploy the app in VS.
Step 4: Connecting Sharp 2Y0A21 Sensor to Arduino Uno + Code
There are red, black, and yellow wires on the sensor: the red is the power, the black is the ground and the yellow is the analog output.
The red wire will be connected to 5v.
The black to the ground (gnd).
The yellow to the A0.
Step 5: Adding Buzzer
The connection is pretty easy. Just look at the fritzing above. However, do note that we are using a lot of sensors for this project, so please make good use of the breadboard as there are a limited number of pins.
Step 6: RGB LED Light
Connecting this is pretty easy too. The only tricky part again is to get around making good use of the breadboard.
Step 7: Adding NFC
Adding NFC is simple too. Again please make good use of the breadboard to make it possible to connect all the sensors to the arduino.
Step 8: Connecting Arduino to Raspberry Pi
Connecting the Arduino to the Raspberry Pi: I have attached everything you need to know in the image above. No worries.
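The project itself displays the readings through the UWP app, but if you want to sanity-check the serial link from a Pi running Linux with Python instead, a minimal sketch using the pyserial package could look like this (it assumes the Arduino prints comma-separated "distance,adc" lines at 9600 baud and appears as /dev/ttyACM0; both details are assumptions, not part of the original project):

    import serial  # pip install pyserial

    with serial.Serial("/dev/ttyACM0", 9600, timeout=2) as port:
        for _ in range(10):  # read a handful of samples
            line = port.readline().decode(errors="ignore").strip()
            if not line:
                continue
            try:
                distance, adc = (float(x) for x in line.split(","))
            except ValueError:
                continue  # skip malformed lines
            print(f"distance: {distance:.1f} cm, ADC: {adc:.0f}")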
Step 9: Connect Raspberry Pi to Touchscreen
Above is a demo on how to attach your Raspberry Pi to the touchscreen if you have not done so.
Step 10: Copy Codes to Arduino
All the codes are written here to get the 4 components (LED light, Sharp sensor, NFC and buzzer) to be responsive and follow the project mission. Again, it is working 100%, but there are many ways to write it and it is not limited to just this one code.
Step 11: Copy Codes to Your UWP
This is to display the radial gauge.
Please install the Microsoft.Toolkit.Uwp.UI.Controls NuGet package:
If you want to edit your radial gauge further, please see the toolkit documentation for further information.
Step 12: DEPLOY
All should be working well.
|
Quality Recommendations
A) Programme administration at school: coordination, planning, information and monitoring
1. The school has appointed a person or persons to oversee KiVa who are familiar with the programme, coordinate implementation, and assist when needed
2. The school has an action plan for KiVa implementation which includes
- which year levels have the KiVa lessons?
- what do other classes do?
- when does the KiVa team meet and how do they schedule time for discussions with students?
- cooperation/coordination between KiVa team and other school staff
3. The school has informed the staff, students and parents about the KiVa programme
- Training for staff
- Training for KiVa team
- User IDs and how to use website/intranet
-"Kick off" for students
- Parent newsletter and parent evening planned
- Orientation plan for any new teachers
- Open and objective discussion about bullying
B) Programme implementation at school
Universal Actions/use of KiVa material
- The KiVa lessons have been completed during the course of the year
- The students have been playing the computer game
- Teachers on duty have been wearing the vests
Indicated actions/actions of the KiVa team
- The school has a KiVa team that addresses acute cases of bullying
- The discussion forms have been used at discussions addressing acute bullying or the process has been documented some other way
- The school has scheduled follow-up discussions related to acute cases of bullying
C) Monitoring and evaluating the KiVa programme
- Students (all) complete the student survey at the end of each school year
- The school evaluates its KiVa actions regularly
|
Sunday, May 30, 2010
Philadelphia Story - Part 5: A Fledgling Government
Welcome back. After Valley Forge in the winter of 1777-78, fast forward back to Philadelphia in 1790. At the risk of glossing over incredibly important events in our nation's early history, we will fill in a brief time line now. The Continental Army continued to battle the British. The Articles of Confederation were drafted in 1777 and ratified in 1781, for the first time naming this new country "The United States of America". The war for independence continued until the British surrender at Yorktown in 1781. The Constitution was drafted in 1787 and subsequently ratified, replacing the Articles of Confederation, the government was formed, the first president - George Washington - was elected, and New York served as the capital city from 1785-1790.
It was then that the seat of the government moved back to Philadelphia, again using the Pennsylvania state house and Philadelphia city hall. Washington DC had just been picked as the permanent site but would not be ready for ten more years. So Philadelphia served as the capital for ten years, during which time the earliest foundations, under-pinning, and traditions were established.
In the left picture, Independence Hall is far left, and the City Hall center. The Supreme Court shared quarters in City Hall. The plaque (r) commemorates this fact.
The main hall was not very big. With an active city hall, it is believed that the early days of the Supreme Court were challenging ones for the court to meet.
In the left picture, Independence Hall is now on the right, and the Congress first met in the two story building on the left. The plaque (r) commemorates this bit of history.
The House of Representatives met in the first floor hall. Most of the furniture was period authentic but not original; the desk and chair used by the first Speaker of the House (r) were original.
The Senate occupied the second floor, which had a smaller hall. This is where the Senate got the moniker "the upper chamber". Some of the furniture was original, and the carpet was re-created from the original specs.
It is hard to describe the feelings we had as we walked through these buildings, trying to imagine what it was like there over 200 years ago. President Washington lived in a house very nearby, which no longer stands. The National Park Service is building a replica on the original site.
Walk through Washington, DC today, and it is filled with Federal office buildings - too many, in my opinion, but that is another blog at another time. Having now seen the modest beginnings of our government, it is amazing what profound events were accomplished there.
|
World’s first ocean cleaning system to be deployed in 2016
Imagine being a teenager with a burning desire to clean up someone else's mess...
Doesn't sound like a fun desire to have, does it? Well, to Boyan Slat, this was not only an overwhelming task to take on, but also a way to make a better world for both mankind and aquatic life. So with this desire, he started "The Ocean Cleanup".
The system will span 2000 meters, thereby becoming the longest floating structure ever deployed in the ocean (beating the current record of 1000 m held by the Tokyo Mega-Float). It will be operational for at least two years, catching plastic pollution before it reaches the shores of the proposed deployment location of Tsushima Island. Tsushima Island is evaluating whether the plastic can be used as an alternative energy source.
The scale of the plastic pollution problem, whereby in the case of Tsushima Island, approximately one cubic meter of pollution per person is washed up each year, has led the Japanese local government to seek innovative solutions to the problem.
Plastic Pollution Problem
About 8 million tons of plastic enters the ocean each year (Jambeck et al., 2015). Part of this accumulates in 5 areas where currents converge: the gyres. At least 5.25 trillion pieces of plastic are currently in the oceans (Eriksen et al., 2014), a third of which is concentrated in the infamous Great Pacific Garbage Patch (Cózar et al., 2014). This plastic pollution will continue to do damage for ages to come.
The array is projected to be deployed in Q2 2016. The feasibility of deployment off the coast of Tsushima, an island located in the waters between Japan and South Korea is currently being researched.
Aerial view of trash in ocean
I, for one, love that someone is making a change!
I think the graduates of this year should take to heart to make some positive change to the world, even in the smallest ways. To make a change is to improve the world for both yourself and the future generations to come. I enjoyed looking into this project, and it gives just a small glimmer of hope that we too can make a change to the world for the better.
|
NE 3.0 hours Valuation, Marketing, and Listings - Course Syllabus
Course Syllabus:
3.0 Elective Hours
FICO, in business since 1956, introduced solutions such as credit scoring that have made credit more widely available. So, what is a credit score? A credit score is a number, generally between 300-850, assigned to rate how risky a borrower a person is. The higher the score, the less risk posed to creditors and the more likely the chances of qualifying for a home mortgage.
Credit scores play a vital role when lenders decide whether to extend credit for a home loan. According to FICO, over 75% of mortgage lenders and over 90% of credit card lenders use credit scores when making their lending decisions. A low credit score may result in a denial of credit, or lenders may charge higher interest rates on loans to individuals with lower scores. This practice is known as risk-based pricing.
Equifax, Experian, and Trans Union dominate the credit reporting business. These three agencies use three different models for credit scoring. FICO develops scoring models for Equifax and Trans Union, which is why they are also called FICO scores.
This course will help you understand the role of credit scoring in the real estate industry and the steps involved in qualifying a buyer.
|
When I was approached with this question, I was utterly confused. I was confused because I thought that both questions meant the same thing. It took long and hard thinking before I realized that getting the media we want or wanting the media we get are completely different ideas. To break down the question, I referred to chapter 3 of the text and thought ‘do I believe that media reflects or affects the world?’ and then I questioned ‘what type of media are we talking about?’. To narrow down the possibilities of my discussion I am exploring social media and how it reflects and affects the world. We want the media we get!
We not only want the media we get, but we are addicted to it. Social networking sites such as Facebook and YouTube have over 1 billion users, and I can assure you that they aren't being forced to use them. "The media construct(s) and shape(s) our actions, our sense of who we are, our daily and annual routines" (pg. 44). Every morning when I wake up I check Facebook, Instagram, and occasionally Pinterest and Twitter before even getting out of bed! This is a daily routine I do by choice, and have been doing ever since I got my smartphone more than a year ago. Not only do I check my social media in the morning, but I am constantly checking throughout the day when I have time or when I lose interest in something (sometimes in lecture, oops!). I know I am not alone in my social media habits, and it just goes to show how much we want the media we receive and how heavily it affects our daily lives.
So how does social media reflect the world and reflect me? When I post a photo or status on Facebook, I am projecting it to all my Facebook friends, which subconsciously shows my personality and thoughts to everyone I know. What's great about Facebook is that I only post things I want to show, giving me power over my social media. Who doesn't want to feel in control of their media? This only makes us want the media we get more, because it's usually positive feedback. "Media producers, texts, and audiences are all part of the social whole; they are not separate entities" (pg. 59). Basically, the people that design social media networks are real people too and they understand what the world wants because they are also a part of it. The creator of Facebook was a university student who had a great idea with the intent of connecting people around the world. He came up with the idea because he thought people would like it, and in return use it.
Understanding how social media affects and reflects me makes it easier to understand why I want the media I get. Do you check in on these social networks and want the media you get as well?
Text: Media and Society
Fifth Edition
Michael O’Shaughnessy
Jane Stadler
|
Diagnostic Mammography
Diagnostic mammograms are follow-up exams that radiologists may recommend to provide a closer look at suspicious areas found in a screening mammogram. A diagnostic mammogram may rule out the presence of possible cancerous tissue or reveal that further diagnostic procedures, such as a breast biopsy, are needed to determine if cancer is present. Doctors may also order a diagnostic mammogram if a lump or other breast abnormality is discovered in a physical examination.
|
Score One for the Rural Hospitals
Posted October 2016
There have been extensive evaluations of the differences between hospitals located in rural areas and their urban counterparts. As the national debt grew in recent years and the high cost of healthcare was thrust into the spotlight, the value of rural hospitals was scrutinized closely. Some in the industry questioned their ability to match the quality of urban hospitals. Some proposed that they should be shut down if they could not stand toe to toe with the urban and academic centers on quality. The Department of Health and Human Services (HHS) recently handed the rural hospitals of America some much-needed good news on that front.
To have a discussion comparing rural and urban hospitals, some background is essential. It must be understood that the playing field is not level. There are dramatic differences in the population served by a particular hospital depending upon the geographic setting.
Depending on the definition, somewhere between seventeen and twenty-five percent of Americans live in rural areas. Rural residents have lower incomes, less employer-based health insurance, and higher rates of Medicare and Medicaid coverage, yet are less likely to have Medicaid even when they qualify. They have higher rates of poor health indicators, such as alcohol and tobacco use at a young age. They are more prone to accidents due to the rural lifestyle. Rural residents also have a higher incidence of a number of specific disease states, such as hypertension and cerebrovascular disease.
Access to care can also be problematic in rural areas. Rural residents typically must travel farther to see a physician or to reach a hospital. While nearly a quarter of the US population lives in rural areas, only ten percent of physicians primarily practice in these areas. There are only about thirty percent as many specialists available in rural areas.
It would appear that rural hospitals face a significant challenge caring for this patient population with fewer resources. However, there is good evidence that rural hospitals incur higher costs, yet still manage to charge less than their urban counterparts. Vulnerable populations are receiving significant value for healthcare.
Still the question of quality remained. Historically, rural hospitals had difficulty generating sufficient numbers of cases to get a true statistical analysis of quality measures. A single poor outcome would often skew the numbers in an undesirable direction. In this era of accountability and reporting, quality comparisons have become more manageable. Large aggregates can be used to get a more robust picture of quality.
The HHS report utilizes the data being collected through several programs for comparison. The bottom line is that rural hospitals were less likely to get penalized for poor performance on hospital-acquired conditions such as falls or infections. They also fared better on value-based purchasing measures which include quality scores and patient surveys. While rural hospitals did have slightly higher readmission rates, it was less than a one percent difference.
There have been a number of reasons postulated for the better performance on these measures. Most agree that it likely stems from established personal relationships between providers and patients, higher levels of trust, more focused attention and the better organizational agility of smaller hospitals.
This news is certainly a shot in the arm for the many rural facilities that work hard every day to do more with fewer resources, and do so quite successfully.
For a number of years prior to 2015, physicians faced cuts to reimbursement by Medicare because of what was commonly referred to as the “flawed” sustainable growth rate (SGR). The SGR was indeed flawed and failed to keep pace with the needed adjustments for Medicare payments to physicians. Every year doctors would face a reimbursement cut, not just from Medicare, but from almost all payers who typically tie their reimbursements to the Medicare rate. Every year Congress would ignore the problem until the last minute (or even after) and push the cuts off for another year. Each year the potential cuts grew larger and more ominous. Physicians pleaded for some relief from the SGR debacle.
In April of 2015, the Medicare Access and CHIP Reauthorization Act (MACRA) was passed by Congress. Initially, this was touted as the "doc fix" because it repealed the SGR and created a new paradigm under which physicians would be paid. Now, many physicians find themselves "in a fix" because MACRA has complicated their lives immensely.
On the surface, MACRA represents the shift from “pay for volume” to “pay for performance”. In reality, volume is still (and will likely always be) a significant factor for reimbursement. The term “performance” on the other hand is going to be something of a headache for most providers.
Medicare (CMS) and Health and Human Services (HHS) have already launched several programs over the past several years aimed at improving quality and efficiency (lower cost of care). Some of these include the Accountable Care Organization (ACO) and the Patient-Centered Medical Home. The results of these endeavors have been largely mixed, but they did create a foundation upon which collaboration, data collection, and reporting could be accomplished. For providers already in those programs, there will be a little less stress in this transition.
The basics of MACRA:
MACRA is an attempt by CMS to pay more to physicians who perform well on metrics considered important to CMS and HHS, and less to those who do not. MACRA is limited to Medicare Part B (outpatient care). It is budget neutral, so the winners' increases are offset by the losers' decreases. There are two models from which physicians and groups must choose. The first is the Merit-based Incentive Payment System (MIPS) and the other is the Alternative Payment Models (APMs). Because APMs are somewhat tailored and unique, the focus here will be on MIPS.
There are four basic areas of interest for MIPS. Each has a specific area of "performance" upon which the provider is judged, and each is weighted. Over time the weights shift, putting more importance on efficiency and less on quality. For these areas, CMS has created a list of metrics which it believes, based upon good evidence, will improve quality and efficiency.
The first area is quality. The quality reporting category actually folds in the Physician Quality Reporting System (PQRS) that many physicians already know. Physicians must choose metrics from an approved list that is somewhat relevant to their specialty. Additionally, they must choose one that is more broadly applicable to all providers.
The second area is resource utilization. This is calculated by CMS based on Medicare Part B claims data. Essentially, CMS tracks all claims for a patient throughout the year. The physicians caring for that patient will be accountable for those costs. This is true regardless of how much involvement they had in the delivery of the care or the generation of the charges. Obviously, the incentive is for primary care providers to exercise greater discretion in their referral patterns to keep costs down.
The third area is clinical practice improvement. This is where the physicians in an ACO or Patient-Centered Medical Home get a pass. Those providers will automatically maximally qualify in this category. For the rest, there will be more metrics to choose and report. There is a list of "Clinical Practice Improvement Activities" from which at least three must be chosen and executed for a minimum of ninety days during the reporting period.
The fourth area is related to technology. Previously, this fell under "Meaningful Use". Physicians must report measures based upon the existing "certified EHR technology" (CEHRT) program to attest that their electronic record system meets appropriate standards.
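As a purely hypothetical illustration of how a weighted composite across the four MIPS categories might be computed, here is a short Python sketch. The category names, weights, and scores are made up for the example; actual CMS scoring rules are more involved and, as noted above, the weights shift over time.

```python
# Hypothetical sketch of a MIPS-style weighted composite score.
# The category weights and scores below are made up for illustration;
# actual CMS weights and scoring rules differ and change by year.

def mips_composite(scores, weights):
    """Combine per-category scores (0-100) using category weights that sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(scores[c] * weights[c] for c in weights)

weights = {          # illustrative weights only
    "quality": 0.50,
    "resource_use": 0.10,
    "practice_improvement": 0.15,
    "technology": 0.25,
}
scores = {           # a provider's hypothetical category scores
    "quality": 85,
    "resource_use": 70,
    "practice_improvement": 100,   # e.g., automatic credit for ACO participation
    "technology": 90,
}
print(round(mips_composite(scores, weights), 1))  # 87.0
```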
The moral of the story for many physicians...be careful what you wish for. In the case of MACRA, the CMS payment model was kicked down the road for years. Very abruptly it came to a sharp curve in the road, and many providers feel as though they have been thrown from the vehicle. A good deal of money, time, resources and education must be dedicated to capturing and reporting numerous measures to avoid increasing pay cuts. Many are scrambling to make decisions about joining ACOs or other alternative payment model groups.
Ultimately, quality of care and efficiency must improve for the US healthcare system to remain viable. Physicians are smart, hardworking and resilient people who will find a way to navigate the turns…albeit on two wheels at times.
|
Zenfolio | Sugato Mukherjee | THE LIVING GODS OF MALABAR
Theyyam is a corruption of the word 'Daivam', which means God in Malayalam, Kerala's state language. An incredibly vibrant tradition that has been in practice for the last 1,500 years, Theyyam is a deep-rooted folk religion of Malabar, the northern region of Kerala, in which the Theyyam practitioner, or Oracle, in a state of exalted trance, becomes the physical manifestation of a deity. He is believed to be possessed with divine powers to heal, foretell the future and confer blessings on devotees. A Theyyam practitioner traditionally hails from the scheduled castes, socially disadvantaged communities in Kerala, where caste divisions are still very strong. But interestingly, during Theyyam, the oracle is worshipped as a living deity by all classes of people, even by the Brahmins, the community at the helm of the social system.
A visually rich cult with explosive colours of face and body painting of the practitioner, Theyyam is also a religious art with its dense layers of imagery and symbolism, where ancient chants are employed in conjunction with the ritual use of music, richly coloured fabrics and dance to visualize and invoke the deity.
Photo gallery captions: Gods and goddesses are believed to be invoked in the flesh in the religious cult of Theyyam. Long makeup sessions involve painting the face and body of the oracle with vegetable dyes. After makeup, the oracle ceremonially gazes into a mirror as the incarnate deity. Workers prepare the god's costume using richly coloured fabrics. A Fire Theyyam with 16 fire torches around the waist stands upon a sheath of coconut leaves. In another fire Theyyam, the performer jumps repeatedly through the bonfire. The oracle struts, pirouettes and jabs frenetically and talks in a different voice tone. The oracle dances in a huge red-gilt, mirrored headdress accompanied by drumbeats. As a Living God, the oracle is worshipped by all classes of people, even by the Brahmins. An oracle struts through the crowd, chanting blessings as the devotees gather around him. Two kamerans, or servants of the God, hold the oracle as he dances on a stool. Once out of the trance, the eyes of the oracle are nursed with cold water. A Theyyam practitioner removes his makeup; he is a well-digger by profession.
|
Pulmonary Medicine
Asthma (including occupational asthma): Pathogenesis and Epidemiology
What every physician needs to know:
Despite significant recent advances, a unified understanding of the physiology, histology, and immunology of asthma remains elusive, in part because of the lack of a clear correlation among symptoms, triggers, and treatment efficacy. However, several characteristics of asthma are common to many phenotypes, including airway inflammation, airway hyper-responsiveness, and reversible airflow obstruction.
Definition of Asthma
Asthma is a heterogeneous disease, whose pathologic features most often include reversible airway obstruction, chronic airway inflammation, bronchial hyperreactivity (BHR), and airways remodelling. With BHR, bronchospasm is easily initiated in response to various triggers and is likely the result of underlying chronic airway inflammation. Unfortunately, in children, BHR is difficult to assess; in adults, factors other than asthma, such as chronic bronchitis, may cause BHR.
Asthma in Children
Asthma in Adults
Symptoms of adult asthma include dyspnea, chest tightness, cough, and wheezing. However, just as in children, not all symptoms may be present in an individual patient. Assessment of BHR in adults is much easier than it is in children, and BHR is often the basis on which the diagnosis of asthma is made. However, accurate testing requires properly performed spirometry, including a good effort by the patient. Pulmonary function testing reveals a decreased ratio of Forced Expiratory Volume in one second (FEV1) to Forced Vital Capacity (FVC). An FEV1/FVC ratio less than 70 percent is considered an "obstructive" pattern. FEV1 is then used to grade the degree of obstruction. An improvement in FEV1 or FVC of at least 12 percent and 200 mL following bronchodilator administration is also highly suggestive of asthma.
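As a concrete illustration of the thresholds just described, the short Python sketch below encodes the FEV1/FVC cutoff and the 12 percent / 200 mL bronchodilator-response rule. The example volumes are invented, and the snippet is illustrative only, not a clinical tool.

```python
# Minimal sketch of the spirometry criteria described above.
# Thresholds (FEV1/FVC < 0.70; >= 12% and >= 200 mL improvement) follow the text;
# this is illustrative, not a clinical decision tool.

def obstructive_pattern(fev1_l, fvc_l):
    """True if the FEV1/FVC ratio is below 70 percent."""
    return (fev1_l / fvc_l) < 0.70

def significant_bronchodilator_response(pre_l, post_l):
    """True if FEV1 (or FVC) improves by at least 12% AND at least 200 mL."""
    delta = post_l - pre_l
    return delta >= 0.200 and (delta / pre_l) >= 0.12

# Example: pre-bronchodilator FEV1 2.0 L, FVC 3.5 L; post-bronchodilator FEV1 2.4 L
print(obstructive_pattern(2.0, 3.5))                  # True  (ratio ~0.57)
print(significant_bronchodilator_response(2.0, 2.4))  # True  (+0.4 L, +20%)
```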
Asthma-COPD Overlap
There is a newly identified entity known as asthma-COPD overlap syndrome (ACOS), which includes patients with features of both COPD and asthma. These diagnoses have classically differed in their pathophysiology and clinical presentations. For details please refer to other chapters (Asthma: Clinical Manifestations and Management and COPD: Clinical Manifestations and Management).
Briefly, as noted below, asthma is classically thought to be due to an IgE- and Th2-mediated pathway. COPD, on the other hand, is marked by neutrophilic and CD8+ infiltration. Clinical presentations tend to vary by age (asthma tends to occur earlier in life) and clinical risk factors (e.g., allergens trigger asthma vs. tobacco use predisposing to COPD). Symptoms are more often episodic in asthma and chronic in COPD. However, recent investigation has shown that these two diseases are not always completely distinct.
Airway reversibility was thought to be a hallmark of asthma. However, over time airways remodelling in asthma can lead to irreversible airways obstruction, and there is a subset of COPD patients with a bronchodilator response. Patients with COPD may also have an allergic phenotype. As such, ACOS is becoming an accepted clinical entity and is thought to make up 15-45% of obstructive airways disease. Both diseases can show airways hyperreactivity, and patients with COPD can also have eosinophilic airways inflammation and atopy. It is suggested that ACOS responds better to inhaled corticosteroids than COPD alone, but current guidelines do not yet address management of ACOS.
Occupational asthma
Occupational or work-related asthma is the most common chronic occupational lung disease in the United States, occurring in 250-300 cases per 1 million people per year. It occurs most often in the manufacturing industry and is also commonly seen in health care and education. Occupational asthma is defined as asthma triggered by allergens isolated to the workplace environment. Once a patient is sensitized to the trigger, very low-dose exposures can trigger symptoms, which are often accompanied by allergic rhinitis and conjunctivitis. There are, however, other exposures that produce irritant-induced asthma symptoms and are not associated with the allergic phenotype.
Occupational asthma is an underdiagnosed and undertreated clinical entity. About 16% of all adult-onset asthma is attributed to occupational asthma, and it should be considered in all cases of adult-onset asthma.
See chapter, Asthma: Clinical Manifestations and Management.
Beware: there are other diseases that can mimic asthma:
In adults, other diseases, such as COPD, are characterized by obstructive lung disease. However, in COPD the improvement in airflow obstruction following bronchodilator is smaller and less complete than in asthma. Other diseases may mimic or co-exist with asthma: Allergic Bronchopulmonary Aspergillosis (ABPA), aspirin-exacerbated respiratory disease (AERD), and vocal cord dysfunction (VCD), among others, must be considered.
How and/or why did the patient develop asthma?
The pathophysiology of asthma is characterized by: bronchoconstriction, airway edema, airway hyperresponsiveness, and lastly airway remodeling. Pathological examination of the lungs may identify cellular and other components of airway inflammation. Nearly 50% of all asthma is allergic asthma.
What is Airway Inflammation?
Role of Eosinophils
Table 1.
Role of Cell Products in Asthma
Role of Neutrophils
Role of Macrophages
Role of Mast Cells
Role of Lymphocytes
Lymphocytes, particularly CD4+ helper T cells, direct immune system activation. Asthma is considered an atopic disease in about half of all cases, which results in so-called Th2 skewing, a CD4+ profile that favors Th2 over Th1 expression. Patients with this asthma phenotype have a Th2 imbalance that results in T-cell-production of IL-4, IL-5, and IL-13, leading to local IgE class-switching by B-cells and eosinophil infiltration. Additionally, type 2 innate lymphoid cells (ILC2s), another newly discovered cell type that is distinct from Th2 cells but acts similarly, are stimulated by IL-33, IL-24, and IL-2 and potentiate the Th2 response (Table 1). As such, ILC2s are implicated in allergic asthma as well. Together, in addition to the inflammatory pathway, Th2 and ILCs also play a role in maintaining/restoring epithelial integrity after injury. However, Th2 skewing is neither necessary nor sufficient for development of asthma, and recent evidence suggests that Th1 may also have a role in asthma pathogenesis.
IFNγ-induced promotion of Th1 responses does not decrease asthma severity. IFNγ itself has been found to be elevated in some patients during asthma flares, and those with severe asthma have a fourfold increase in IFNγ-positive T-cells compared to mild or moderate asthma. Together, these data suggest that Th1 activity is also implicated in some subtypes of asthma, particularly asthma that is more difficult to control.
Other T-cells, namely Th9, Th17, Th22, and regulatory T-cells, have also been more recently implicated in asthma. Th9 cells, which secrete IL-9, are also upregulated in asthma and seem to be involved in allergic inflammation. While IL-9 and Th9 cells can propagate Th2-induced inflammation, IL-9 inhibition does not decrease airways inflammation. Therefore, the Th9 cell is not sufficient for asthmatic inflammation, and instead may be a marker of disease severity. Th17 cells secrete IL-17 and may lead to formation of dual-positive Th2/Th17 cells. The presence of these cells in BAL fluid is related to increased peripheral eosinophilia in some asthma patients. IL-22, which is secreted by Th22 as well as other T-cells, is implicated in asthma. However, IL-22 can be both protective and pro-inflammatory, so further study is needed to clarify the role of IL-22 and Th22 in asthma. Lastly, absence or dysfunction of regulatory T-cells may also play a role in asthma pathogenesis through failure to promote tolerance to inhaled agents, including allergens.
Role of Epithelial Cells
What is Airway Hyper-responsiveness?
The bronchial smooth muscle in asthma is more likely to constrict in response to stimuli compared with normal airways. Such heightened responsiveness can be demonstrated by performing provocative lung testing, such as methacholine challenge testing (MCT). Inhalation of methacholine, a cholinergic agonist, produces more bronchoconstriction in response to a given dose in asthmatics than in individuals without asthma. Just as exogenously administered chemical agents may cause bronchoconstriction, other stimuli, such as cold air, particulates, allergens, and physical stimuli like exercise, may also induce contraction of bronchial smooth muscle.
Airway Hyper-responsiveness in Asthma
Cause of Airway Hyper-responsiveness
Airway inflammation is believed to underlie bronchial hyper-responsiveness. The number of inflammatory cells (including eosinophils, neutrophils, and T-cells) recovered from bronchoalveolar lavage fluid from asthmatics who undergo bronchoscopy correlates with airway hyper-responsiveness. Adequate treatment with an inhaled corticosteroid decreases hyper-responsiveness, presumably by reducing cellular infiltration and resulting inflammation. However, the effect that inflammation has on airway smooth muscle cells and underlying connective tissues responsible for the hyper-responsiveness, and whether hyper-responsiveness is a direct effect of the inflammation or a consequence of long-term, chronic injury remains unknown.
The Role of Airway Hyper-responsiveness in Asthma Symptoms
Airway hyper-responsiveness leads to airway obstruction, largely a reversible process, with essentially normal airway lumen size in between asthma exacerbations. Notably, however, some degree of irreversible airway narrowing can occur over time as a result of airway remodelling, as described previously. Other factors, such as mucus plug formation, may also affect lumen patency.
Treatment of airway inflammation with oral or inhaled corticosteroids is aimed at reducing airway inflammation and mitigating airway remodelling, airway hyper-responsiveness, and mucus production. In contrast, use of inhaled beta-agonists and other such agents targets a reduction in smooth muscle contraction. Although important in treating acute symptoms, the latter group of medications is short-acting and does not address the underlying cause of asthma symptoms.
Finally, other agents target specific pathways involved in inflammation; examples include leukotriene antagonists (e.g., montelukast, zafirlukast, and zileuton), antihistamines, and anti-IgE monoclonal antibody (e.g., omalizumab). Since each of the affected pathogenic pathways plays a variable role in each asthma patient, the clinical benefit of each class of medication varies from person to person.
Airways remodelling
One consequence of chronic airway inflammation in asthma is so-called "airway remodeling," the histologic and cellular characteristics of which can be seen in specimens obtained through biopsy, although not routinely used for clinical purposes in diagnosing or managing asthma. The following cell types are involved in airways damage and remodelling.
Epithelial Damage
Submucosal Remodeling
Smooth Muscle Hypertrophy
Smooth muscle hypertrophy occurs throughout the airways of asthmatics. Much like the increase in subepithelial connective tissue cells, smooth muscle cell hypertrophy responds to chronic inflammation and repair signals, such as EGF. Repeated, aberrant smooth muscle contraction may also lead to an increase in muscle mass over time. There is increasing recent evidence that the mechanical stress from bronchoconstriction or ventilator-induced lung strain also results in smooth muscle hyperplasia over time.
Consequences of Airway Remodelling
Airways remodelling is a complex process, and the driving force behind airway remodelling in asthma remains unclear. Regardless of the cause, thickening of the airways combined with increased smooth muscle contraction leads to airway narrowing and obstruction of airflow. Over time, this process results in more severe disease that is less responsive to therapy and causes increased morbidity and mortality.
Which individuals are at greatest risk of developing asthma?
Important socioeconomic risks for development of asthma include:
• Childhood
• Black race, Latino ethnicity
• Lower income
Important medical and environmental risks for development of asthma include:
• Allergic diatheses
• Family history of asthma or allergies
• Eosinophilic disorders
• Obesity
• Maternal smoking during pregnancy
• Premature birth
Important environmental factors that decrease risk of asthma include:
• Maternal vitamin D supplementation in the third trimester
• Infant and childhood exposure to dogs and farm animals
Overall Prevalence of Asthma
From 1980 to 1997, asthma prevalence in the United States increased significantly, from 4 to 6 percent. Changes in the National Health Interview Survey in 1997 make direct comparison of more recent and historical data impossible, but asthma prevalence since 2001 also has increased (See Figure 1). However, the rate of asthma attacks, which may be a more important measure, has remained constant, suggesting either recent improvements in asthma management or inclusion of less severe phenotypes.
Figure 1.
US Asthma Prevalence
Asthma Prevalence by Age and Gender
Figure 2.
Age Specific Asthma Prevalence
Figure 3.
Prevalence of Asthma by Subgroup
Figure 4.
Asthma Mortality by Race/Ethnicity
Asthma Prevalence by Racial and Ethnic Group, Geography, and Income
Asthma prevalence varies significantly by ethnic group. Asthma is most prevalent among African Americans, who are most at risk for racial discrimination, poorer access to health care, and resultant worse long-term outcomes from asthma. In the United States, asthma affects 8.2 percent of non-Hispanic whites, 11.1 percent of non-Hispanic blacks, and 6.3 percent of Hispanics. Among Hispanics, the prevalence varies widely, with 16.6 percent of Puerto Ricans affected compared with only 4.9 percent of those of Mexican heritage. The reasons for these differences are not clear.
Obesity and Asthma
Those who are obese have an increased incidence of asthma. Obesity is also closely linked to lower socioeconomic status and racial disparities, but obesity alone portends a worse prognosis among those diagnosed with asthma even when race and income are controlled for. It is hypothesized that, since obesity results in a proinflammatory state, there is greater risk of developing asthma due to overlap in activation of common inflammatory pathways. Obese children and adolescents also tend to be less bronchodilator-responsive and have a larger symptom burden compared to non-obese asthmatic controls.
Maternal Factors
Studies indicate that maternal smoking during pregnancy leads to increased risk of childhood wheezing. On the other hand, vitamin D supplementation during pregnancy has a trend towards decreased incidence of childhood wheezing, although the initial data did not reach statistical significance.
See chapter, Asthma: Clinical Manifestations and Management.
See chapter, Asthma: Clinical Manifestations and Management.
Pulmonary function testing, including provocative lung testing (e.g., methacholine challenge), can be diagnostic of asthma. However, some patients with asthma may have negative pulmonary function tests. For more details, see chapter, Asthma: Clinical Manifestations and Management.
See chapter, Asthma: Clinical Manifestations and Management.
Although asthma is characterized by a familial predisposition, the disorder is polygenic and is associated with complex inheritance patterns that are strongly influenced by gene-environment interactions. There are no diagnostic genetic tests that can be performed, but it is prudent to understand gene associations, which relate to pathogenesis and clinical course of the disease.
Genetics Versus Environmental Influences in Asthma
Gene Associations in Asthma
No single gene has been convincingly identified as being causally related to asthma in a large percentage of patients. Rather, asthma is thought to be a complex disease that results from the contributions of many genes. Additionally, epigenetic changes may also play a role, as histone modifications have been associated with bronchial hyperresponsiveness and corticosteroid resistance in asthma.
See chapter, Asthma: Clinical Manifestations and Management.
What is the prognosis for patients managed in the recommended ways?
Consequences of Asthma
Asthma results in considerable morbidity and utilization of health care resources. In 2009, asthma-related outpatient physician visits totalled 7.8 million for adults and 7.5 million for children. In the same year, 1.1 million adults and 0.6 million children had asthma-related emergency room visits, and asthma-related hospitalizations exceeded 299,000 for adults and 157,000 for children.
A large case control study showed that adults with asthma have increased all-cause mortality compared to non-asthmatic controls over a 25-year study period, and most of the deaths in the asthmatic group were due to obstructive lung disease or status asthmaticus. Risk factors for death were older age, lower FEV1, large degree of bronchodilator response, elevated peripheral eosinophil count at time of enrolment, and prior hospital encounters for asthma.
Although black and white asthmatic patients generate the same relative numbers of ambulatory visits for asthma, black patients are more than three times as likely to visit the emergency room and nearly twice as likely to be hospitalized as white patients. Finally, of the 3447 asthma deaths reported in 2007, 185 were children. Among children with asthma, black children have over twice the likelihood of hospital readmission than white counterparts. Asthma-related mortality is higher among blacks than among other ethnic groups. Research has shown that socioeconomic disparities play a large role in the clinical outcomes of black children with asthma. Additionally black children have increased exposure to allergens, less use of long-acting bronchodilators, and increased reliance on rescue inhalers.
Employed adults with asthma miss more than 14 million workdays annually, and nearly 34 percent of adult asthmatics miss at least one day of work annually. An additional 22 million days of household work or other work are lost annually among asthmatics who are not employed outside the home. Estimates suggest that asthma-related costs were more than $56 billion in 2007, or about $3300 per asthmatic.
Natural History of Asthma
In the majority of cases, asthma begins during childhood. During the first three years of life, half of children have wheezing associated with a viral upper respiratory infection, a small proportion of which develop true asthma. Most children have resolution of their asthma as they grow. Children who are born prematurely can outgrow their asthma, but a significant number continue to have asthma-associated symptoms through adolescence and beyond or have recurrence of disease as an adult.
Unlike children, adults diagnosed with asthma rarely have complete remission of their disorder. The risk of asthma-related death increases with age and may be underreported. Asthma is a source of significant morbidity and may be life threatening, especially when it is present with other comorbidities, such as cardiac disease and severe food or medication allergies.
What other considerations exist for patients with asthma?
See chapter, Asthma: Clinical Manifestations and Management.
|
to hold your tongue
Idiom Definition
"to hold your tongue"
to refrain from speaking
Related words and phrases:
Idiom Scenario 1
Two colleagues are talking ...
Colleague 1: I think we should let the intern join us for this meeting. It will be most educational.
Colleague 2: I agree that the intern could learn a lot but we have to caution him to, at all times, hold his tongue.
Colleague 1: I agree. Not having the big picture, the intern could say something that would make us all look bad.
Idiom Scenario 2
Two friends are talking ...
Friend 1: Are you and your wife fighting again?
Friend 2: Yes. And, really, it is over nothing important. She comes home after a long difficult day and I know she is stressed out and then she says something to provoke a quarrel and I respond stupidly and a fight begins.
Friend 1: Sounds like you need to learn to hold your tongue and just let your wife express herself without you reacting.
Friend 2: I agree. I just need to keep my mouth shut and let it pass.
to hold your tongue - Usage:
Usage Frequency Index: 536
to hold your tongue - Gerund Form:
Holding his tongue, he chose not to contradict the boss in front of the other staff.
to hold your tongue - Examples:
1) ... better person, no matter the circumstances. Walk away, hold your tongue, be the solution, not even remotely part of the problem.
2) If you don't know, hold your tongue and ill-fated opinions.
3) Hold your tongue -- that kind of talk can hurt me professionally, since we owe this clinic ...
4) Queuing for what can seem an eternity? Patience. Hold your tongue, stop sighing, rolling your eyes, shuffling from foot to foot.
5) If you can hold your tongue and control your actions when you are insulted, your Taekwondo training is helping.
6) Will you hold your tongue, sir, and let me speak?
7) Hold your tongue, obey orders... and don't take any of it too seriously.
8) ... give your child a verbal lashing when she engages in undesirable antics, hold your tongue. In fact, shouting may only produce more negative behaviour.
9) ... it might be best to hold your tongue -- and to start gathering information.
10) ... state, changing your behaviour, and getting a different outcome. Hold your tongue; Rather than just blurting out what you feel.
11) ... their recollection of the events can be impaired. It is wise to hold your tongue until you have had time to gather your thoughts.
12) You cared about them at some stage, so hold your tongue when you're tempted to lash out to all and sundry.
13) If you hold your tongue, you keep silent even though you want to speak.
14) Make sure, you hold your tongue however the frustration maybe.
15) Now the real issue is how do you hold your tongue when everything inside you wants to get mad and take it out on him.
16) My advice to you young fella is to hold your tongue, obviously you know nothing about our culture otherwise you would show more respect.
17) So if you want to keep your account, hold your tongue on that stuff. We're not interested.
19) However, if you hold your tongue and say nothing, but instead express understanding, then you will have a chance ...
20) ... struggling to survive like the people you pity so much are. Hold your tongue. Who are you to tell someone whom you've never met that their suffering ...
|
Hepatitis is a condition that causes liver inflammation and can be due to alcohol, drug use or certain health problems. However, a virus is most often responsible for hepatitis. The most common types of viral hepatitis include A, B and C:
Hepatitis A
This condition is highly contagious and can be spread from person to person. Fortunately, this form of hepatitis only causes a mild infection and many people with it don’t even know they are sick. Many times hepatitis A will go away on its own without causing long-term damage to the liver. Hepatitis A is spread through food or water and common food culprits include fruits, vegetables and shellfish.
Hepatitis B
Those with hepatitis B often experience milder symptoms for a short period of time before getting better on their own. Unfortunately, not everyone is able to shed the virus and the infection can become chronic. Hepatitis B can lead to other complications like liver failure or cancer. This form of hepatitis is often spread through unprotected sex, but it’s also possible to spread through contact with blood or other bodily fluids.
Hepatitis C
Only about one-fourth of people who develop hepatitis C actually get rid of it on their own, while the majority will carry this virus for the rest of their lives. Just like chronic hepatitis B, long-term hepatitis C can also cause other issues like liver failure and cancer. Hepatitis C can be transmitted through blood, such as by sharing needles.
While symptoms don’t always have to be apparent in the first few weeks after being infected when the symptoms do manifest you may experience,
• Nausea
• Lack of an appetite
• Stomach pain
• Fatigue
• Low-grade fever
• Jaundice (yellow skin or eyes)
When hepatitis B and C are chronic a person may not experience any symptoms for years, but by the time there are symptoms the liver may already be damaged.
Treating Hepatitis
Hepatitis A will almost always go away on its own without treatment. Chronic hepatitis B will require monitoring and treatment to prevent it from damaging the liver. While not always prescribed, sometimes antiviral medications can help.
Chronic hepatitis C will also require treatment. There are several FDA-approved medications for treating this condition and preventing liver complications. Even if the virus isn't completely eliminated, the best-case scenario is that it won't show up in the blood six months after your treatment ends.
If you have hepatitis and are interested in finding out your treatment options then it’s time to call our South Side office at (773) 238-1126.
|
Essence of Mind and Body for Descartes
Essay by ilnet2000, December 2006
The mind/body problem has been a major topic debated by many esteemed philosophers over the past centuries. Many philosophers have attempted to uncover the truth behind the disparity between the mind and the body; however, their endeavors were futile. The idea of mind is a difficult topic to apprehend because of its abstract nature. A philosopher by the name of Rene Descartes is one of the few thinkers to dissect and assess the mind/body problem with his argument for dualism, the position that the mind is distinct from the body. In Descartes' Meditations on First Philosophy, he is able to construct his arguments for the real distinction of mind from body by understanding how the physical world operates based on the notions he sets. His claim will eventually be refuted because of his lack of understanding of how a non-extended mind affects an extended body.
Descartes' main purpose in writing the meditations was to ascertain "certainty."
In order for him to do so, he used systematic doubt to find something that was truly certain. Descartes had a number of reasons or sources for doubt. First, he reasoned that our senses are dubious, thus we cannot rely on them to make judgments about the physical world. This meant that the physical world he knew was also dubious because he experienced the world through sensory applications. Second, he states it is very possible that we do not know the difference between what is real and what is not, and we could be living in a dream. Another source of doubt for Descartes is an internal defect, meaning that we do not know if there is something wrong with us. He might just be insane. His last doubt is the possibility of an external defect, which is a mind-independent reality that can...
|
Important Changes in Japan During the 20th Century.
Essay by irish_hoosier, University, Bachelor's, November 2005
The 20th century was by all accounts an era of considerable progress for Japan. As a result of the remarkable success in the postwar era, Japan has become a model of the industrialized society for the world to take note. In this paper I will attempt to illustrate the important changes that Japan went through during this time of progression, using 1945 as a dividing point. These changes include a different role of the emperor, a new political system, social reform and the rise and downfall of the economy.
Emperor Hirohito (1901-1989) was the emperor of Japan from 1926 to 1989. He chose to designate his reign with the term "Showa" (Enlightened Peace) and he is sometimes referred to as the Emperor Showa. His reign was the longest of any monarch in Japanese history.
Under the Japanese political system before World War II, the Emperor was in theory all-powerful.
The emperor was sovereign, and everyone who worked for the government in effect worked for the Emperor. That meant, in effect, that power was divided among several different groups within the Japanese political system, most importantly, the military, the civilian bureaucracy, and to some extent the Diet, the Japanese parliament.
In the pre-war period, before the Second World War, the fact that the Emperor was in theory all-powerful meant in effect that those groups who could claim to speak for the Emperor were the ones who were in fact all-powerful. Thus, in the 1930s it was the Japanese military, which claimed to speak on behalf of the Emperor, that managed to secure virtually all political power unto itself.
After the Japanese surrendered at the end of World War II in 1945, American forces under Gen. Douglas MacArthur occupied Japan until 1952. During this occupation Japan was forced...
|
semiweekly
adjective
occurring, done, appearing, or published twice a week:
semiweekly visits.
noun, plural semiweeklies.
a semiweekly publication.
adverb
twice a week:
He traveled semiweekly to Detroit.
Read Also:
• Semiyearly
adjective 1. semiannual (def 1). adverb 2. twice a year; semiannually: He seeded the lawn semiyearly. adjective 1. another word for semiannual
• Semmelweis
noun 1. Ignaz Philipp [ig-nahts fee-lip] /ˈɪg nɑts ˈfi lɪp/, 1818–65, Hungarian obstetrician. noun 1. Ignaz Philipp. 1818–65, Hungarian obstetrician, who discovered the cause of puerperal infection and pioneered the use of antiseptics Semmelweis (zěm’əl-vīs’) Hungarian physician who was a pioneer of sterile surgical practices. He proved that infectious disease and death in […]
• Semmelweiss
Semmelweiss Sem·mel·weiss (zěm’əl-vīs’), Ignaz Philipp. 1818-1865. Hungarian physician who pioneered the use of antiseptics in obstetrics as the result of his discovery that puerperal fever is a form of septicemia. His methods reduced mortalities but were not widely adopted until after his death.
• Semmes
noun 1. Raphael, 1809–77, Confederate admiral in the American Civil War.
|
Civil Rights
As the debate over Confederate statues continues, Professor Peniel Joseph talks about historical context of the Confederacy. Removing the statues, he says, represents "honoring the best of our history and not trying to somehow scrub or efface that history."
Just past the southern gates at the Texas State Capitol stands the Confederate Soldiers Monument, a symbol of the men who gave their lives in the name of the south.
Confederate President Jefferson Davis towers above them. That statue was erected nearly 40 years after the end of the Civil War.
"Jefferson Davis was seen in the 1910s and 20s not as a traitor, but as a courageous defender of white rule and that's what those statues were designed to maintain. Some of them even say that," says University of Texas history professor Jeremi Suri.
If history is any indicator, the Charlottesville tragedy, in which one protester was killed and more than a dozen were hospitalized, could be the spark that ignites change, according to Jeremi Suri, a University of Texas at Austin professor of history and public affairs.
After a White House breakfast celebrating Black History Month, President Donald Trump's comments on Frederick Douglass are one example of why Black History Month still matters, says Professor Peniel Joseph.
|
Late Pleistocene lacustrine fan-deltas which developed in an arid, disequilibrated submergent environment along the western fault scarp of the Dead Sea Rift are characterized by: 1) alluvial fan deposits of crudely stratified conglomerate beds and horizontal and cross-bedded sandstones. The conglomerates and sandstones are cone-shaped and comprise the entire sequence in the fan head area; 2) fan-front thick and thin interlayers of ripple cross-laminated sandstones and mudstones which constitute a sedimentary belt a few kilometers wide in front of the fan; and 3) fan-influenced detrital laminated chalks. Both the interlayered and the laminated units appear as widespread sheet bodies whose basal contacts are gradational and upper contacts, sharp and disconformable. The rapid rise of lake level resulted in the buildup of upward-fining sequences. At the end of the Pleistocene, the lake level dropped; it began to rise again during the Holocene and, contemporaneously, new fan deltas began building up.
|
April is Parkinson’s Disease Awareness Month: How Can YOU Make a Difference?
Jayne Latz of www.speechassociatesofnewyork.com discusses Parkinson's Disease, explaining its effects and how you can help.

Did you know that currently 1 million people in the United States are living with Parkinson’s Disease? Each year, the Parkinson’s Disease Foundation declares April Parkinson’s Disease Awareness Month in an attempt to raise public awareness of this devastating disease. In support of the PDF’s awareness campaign, this week we’ll talk about what Parkinson’s Disease is, how it affects communication, and how you can help raise awareness.
What is Parkinson’s Disease? Parkinson’s Disease is a progressive neurological disorder. The disease causes cell death in a specific part of the brain that controls movement. Parkinson’s causes tremors, slow movement, stiffness, and a loss of balance and coordination. As the disease progresses, disturbances in mood, cognition, and behavior may appear as well.
How does Parkinson’s Disease affect communication? The motor disabilities that accompany Parkinson’s Disease can affect the way a person speaks. Speech may become slurred or unintelligible as the disease progresses. A loss of vocal volume is also common (a hallmark of those with Parkinson’s)—a person with Parkinson’s will often develop an abnormally quiet voice, making it difficult to communicate effectively. With decreased breath support comes what is known as “fast rushes of speech” as the individual rushes to say what they need to say. This often causes speech that is unclear. In addition to voice and speech problems, a person with Parkinson’s disease may also have difficulty swallowing, as they lose coordination in the muscles of the mouth and throat. A speech-language pathologist can help with each of these issues, providing techniques to improve speech clarity, increase breath support for stronger vocal volume, and teaching compensatory strategies to improve safety in swallowing.
What can you do to help? The Parkinson’s Disease Foundation has provided an on-line toolkit to help get you started raising awareness for Parkinson’s Disease Awareness Month. Check it out at http://www.pdf.org/parkinson_awareness
Has Parkinson’s Disease affected your life or the life of someone you love? What challenges have you faced and how have you worked through them? Share your story in our comments section below.
For information on our New York based Speech-Language Pathology services, please call Speech Associates of New York today at (212) 308-7725 or visit our website at www.speechassociatesofny.com and find out how our team of professionally trained and certified speech-language pathologists can help you communicate your best!
Jayne Latz
Speech Associates of New York
(212) 308-7725
|
When you think of German vehicles, odds are that you're picturing big Mercs and Bimmers speeding on the Autobahn. However, the German market is quite diverse and there's a good supply of alternative fuels to be had. You can get most models powered with gasoline, diesel, natural gas (CNG) or liquified petroleum gas (LPG, also called Autogas). While the debate between the first two choices is quite old, how do you choose between the two latter alternatives? While you can convert a gasoline car to run on LPG, you can't do this with CNG. So which one to choose?
First, let's speak about the cost of fuel itself. According to TÜV Süd, a kilogram of natural gas has the same energy content as 1.5 liters of gasoline. When it comes to costs, the average price of driving with CNG is half of the cost of gasoline. As for LPG, two liters have the same energy as 1 kg of natural gas. This makes LPG about 30 percent more expensive than CNG (and about 35 percent less than gasoline).
There's more after the jump.
[Source: Auto News]
We can't ignore the different costs of the LPG and CNG conversions. Usually, a gasoline-to-LPG conversion costs between €1,800 and €3,500, although some brands are starting to offer them as standard equipment, guaranteed. No CNG conversions are offered, and CNG models usually carry a €2,800 to €5,500 surcharge over their gasoline counterparts, although sometimes they cost about the same as a diesel model. A good rule of thumb: if you drive less than 10,000 km per year, stick with a fuel-efficient gasoline vehicle.
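To see how that rule of thumb falls out of the arithmetic, here is a small Python sketch of a break-even calculation. The per-kilometre fuel costs and the conversion price are assumptions chosen only to illustrate the math, not current German market figures.

```python
# Hypothetical break-even estimate for an LPG conversion versus staying on gasoline.
# All prices and consumption figures below are assumptions for illustration,
# not current German market data.

def breakeven_km(conversion_cost_eur, petrol_cost_per_km, alt_cost_per_km):
    """Distance (km) after which fuel savings repay the conversion cost or surcharge."""
    saving_per_km = petrol_cost_per_km - alt_cost_per_km
    return conversion_cost_eur / saving_per_km

# Assumed figures: gasoline 0.11 EUR/km, LPG about 35% cheaper, 2,500 EUR conversion
petrol = 0.11
lpg = petrol * 0.65
print(round(breakeven_km(2500, petrol, lpg)))   # roughly 65,000 km

# At 10,000 km per year that is more than six years of driving,
# which is why the rule of thumb above favours gasoline for low-mileage drivers.
```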
What about the environmental balance? Both CNG and LPG have the smallest environmental impact among the fossil fuels. Their emissions of CO2 per km are usually 25 percent lower than equivalent gasoline cars, and smog production is about 80 percent less. Particulates, NOx, carbon monoxide and sulfur dioxide emissions are almost zero. Germany is also investing in obtaining biogas from renewable sources, usually waste, which would make CNG more interesting.
Currently, up to 3,000 gas stations in Germany offer LPG pumps, compared to 800 stations offering CNG (although this number is increasing). There are tax advantages to using these fuels, too, although LPG might lose its privileged status in 2018. A taxation scheme based on CO2, which will be introduced in Germany soon, might keep these two fuels as a viable options.
|
Labor Unions Research Paper Starter
Labor Unions
At the end of World War II, one in three working Americans had a union card. This, however, proved to be the high-water mark for organized labor, which in 2012 could claim the loyalty of only about one in nine Americans. Its decline stems in part from massive structural shifts in the economy, increasingly business-friendly government policies, the advent of the knowledge and service worker, changing demographics and lifestyle, and, regrettably, the unions' own inability to prove their relevancy and value to workers. The question is: can the labor movement recover?
Keywords AFL-CIO; Blue Collar Workers; Closed Shop; Collective Bargaining; Craft Unions; Deskilling; Expectancy-Value Theory; Frustration-Aggression Theory; Industrial Unions; Interactionist Theory; Lockout; Open Shop; White Collar Workers
In 1945, slightly more than one in three U.S. private-sector employees belonged to a union. By 1995, this ratio stood at slightly more than one in ten (Strauss, 1995, p. 330). This is not just an American phenomenon: between 1970 and 2003, the union share of the British workforce dipped from 44.8 percent to 29.3 percent, the French from 21.7 to 8.3 percent, and the Japanese from 35.1 to 19.7 percent. By comparison, 23.5 percent of the U.S. workforce was unionized in 1970; in 2003 this figure stood at just 12.4 percent (Viser, 2006, p. 45). In a span of just ten years, from 1985 to 1995, union membership in the United States declined 21 percent. In fact, union membership in seventy-two of ninety-two countries surveyed in 1995 by the International Labor Office had declined in the previous ten-year period (Epstein, 1998, p. 13).
All told, according to the Bureau of Labor Statistics, 14.4 million Americans belonged to a union in 2012. Nearly 36 percent of the public sector was organized; within the public sector, 41.7 percent of local government employees — most notably teachers, police officers, and fire fighters — belonged to a union. Just 6.6 percent of the private sector workforce was unionized. The highest participation rates here came in the transportation and utilities industry (20.6 percent) and construction industry (13.2 percent). The lowest rates came in the financial services industry (1.9 percent) and agriculture (1.4 percent). Age mattered across industries, with the highest rate among workers aged 55 to 64 (14.9 percent), and the lowest among those aged 16 to 24 (4.2 percent) (Bureau of Labor Statistics, 2013).
Given all these statistics, one cannot help but ask: Why do workers still join a union? What larger economic and political purpose do unions continue to serve, if any? What precipitated such a dramatic decline, and is it irreversible?
To a sociologist, a union is one way people band together for protection and mutual succor. It imposes order, imparts values, and ensures its and therefore its members' survival, like other institutions. A union deliberately sets about creating a monopoly in the supply of labor in order to set prices (i.e. wages) and other conditions of exchange. It behaves exactly like a business would. In the jargon of industrial relations, labor and management each seek to strengthen its bargaining power at the other's expense. The political scientist will explain how the union, over time, became an influential constituency in its own right. In the larger context, the labor movement counters capitalism's worst excesses, preventing them from destabilizing the social order and thus the state's claim to legitimacy (Sullivan, 2006).
Theories on Union Participation
The decision to join a union, social psychologists tell us, is a rational choice best explained by expectancy-value theory. Here, the worker assesses the perceived benefits and attendant costs, paying particular attention to
• The likelihood the union can achieve its stated goals,
• The reaction of significant others and
• The prospective rewards and/or penalties of joining.
At some point in the process, the prospective member must reconcile the certainty of the individual risk with the uncertainty of the collective action (1) succeeding and (2) everyone involved sharing equally in that success. Social psychologists call this the dilemma of collective behavior, and the likelihood of non-joiners benefiting from the group's actions the free-rider problem. These very real drawbacks are overcome psychologically only when someone is convinced that
• Success depends on his or her participation,
• Others will participate in sufficiently large numbers, and
• Collective action will achieve the desired goal (Klandermans, 1984).
Other theories emphasize the emotional or social component. Unaddressed dissatisfaction with wages, working conditions, treatment by supervisors and management, even the work itself breeds worker resentment and an 'us-versus-them' mentality. This is the core proposition of frustration-aggression theory. Successful union organizing signals the incomplete integration of the worker into the company, a system in disequilibrium attempting to right itself. A willingness to strike grows as individual and collective workers' frustration levels increase. Typically, however, a strike occurs only when workers consider their goals and the union's in close alignment. Worker non-participation and membership defections suggest that these interests can and do diverge. Frustration-aggression theory thus also explains dysfunctional unions. Research suggests that most workers consider the costs and benefits of acting out their frustration. Alternatively, interactionist theory looks beyond the workplace for explanations of union participation and finds them in primary social groups — family, friends, and neighbors. They, after all, are most intimately involved in shaping our values and beliefs; no other organization or institution — state, church, union — exerts as much direct influence. Successful unions aspire to assume such a role in their members' lives, i.e., to become one of their primary groups. The era of the company town is largely gone; most workers no longer live together in tight-knit communities (Klandermans, 1986).
Further Insights
The Historical Context
The first to organize were the highly skilled workmen in craft unions. Ironworkers belonged to one, machinists to another, bricklayers to yet another and so on each according to his trade. Each craft union represented the interests of all its members working in different industries and regions. And because it deliberately limited membership by licensing only graduates of its own apprentice programs, each craft union exerted monopoly-like powers in its dealings with employers. The same principle underlay the success of the artisan guild of medieval times. Here, all the local craftsmen in a given trade — weaving, masonry, metal-working, baking, soap-making, etc. — agreed upon the prices they'd charge, admonished colleagues producing inferior goods, accredited the apprentices and journeymen who would one day join their ranks, and spoke as one voice on the municipal affairs. Guild members differed from nineteenth-century union-craftsmen in one crucial respect: they owned the ateliers in which they worked.
Industrialization turned the workshop into the factory floor and the proprietor-craftsman into a wage-earning employee. The investor who financed the increasingly mechanized equipment these modern-day artisans used valued cost-cutting over a generous wage. Nor could they, in all fairness, realistically do otherwise given the ruthless competition of the laissez-faire capitalism of the day. Business believed it had to answer for its actions to no one but itself. Surveying the dismal living conditions the early industrial worker endured as a result, social reformers and labor activists believed otherwise, but theirs was very much a minority view. The machines that powered the industrial revolution had to be built and maintained by skilled workers. They too were in the minority but, unlike others, literally held the power to slow or shut down the production line (Haydu, 1989).
Their influence waned in the twentieth century as mass industrialization gathered pace. Complex production processes increasingly were broken down into a series of simple tasks more readily done by machines that almost anyone with a modicum of training could attend to. And so the era of the assembly line and the consequent 'deskilling' of the workforce in the late nineteenth century undercut the craft unions' strongest bargaining chip. A ready supply of untrained workers meant employers could hire and fire virtually at will; a surplus meant they could keep wages low and factory conditions uncongenial (Fulcher & Scott, 2011).
To wrest back some measure of bargaining power from large industrialists would take nothing short of a mass movement, and a militant one at that. Workers could only turn to each other for aid and comfort; prevailing government policy and court rulings stood squarely in the capitalists' corner. Realizing this, unions became inclusive rather than exclusive in their outlook. If they could amass enough support to shut down production, they reasoned, employers would have to make concessions. The dense concentrations of unskilled labor immediately surrounding industrial sites proved a boon in this respect. If companies could successfully recruit there, so too could unions. In the era of vertically integrated monopolies, a single company extracted and shipped its raw materials to waiting processing and production plants and then delivered the finished goods to customers. To wield any influence unions had to enlist national support across...
|
Computing distance using lat/long coordinates
Fill in the latitude and longitude of two geographic locations and compute the distance between the points. The distance returned is the arc length of a great circle connecting the two points, essentially air miles. (Details)
Note that latitude and longitude are in degrees. Latitude is in degrees north of the equator; southern latitudes are negative degrees north. Longitude is in degrees east of the prime meridian; degrees west are negative degrees east.
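For readers who want to reproduce the calculation themselves, here is a minimal Python sketch of the great-circle (haversine) computation described above. It assumes a spherical Earth; the 3,958.8-mile mean radius and the sample coordinates are illustrative values, not part of the original page.

```python
import math

def great_circle_miles(lat1, lon1, lat2, lon2, radius_miles=3958.8):
    """Arc length of the great circle joining two points, in air miles.

    Latitudes and longitudes are in degrees; southern latitudes and western
    longitudes are negative, matching the sign convention described above.
    Assumes a spherical Earth with the given mean radius (an approximation).
    """
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    # Haversine formula: numerically well-behaved even for nearby points.
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * radius_miles * math.asin(math.sqrt(a))

# Example (illustrative coordinates): Houston (29.76, -95.37) to London (51.51, -0.13)
print(round(great_circle_miles(29.76, -95.37, 51.51, -0.13)), "miles")
```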
You can look up your latitude and longitude here.
|
How do you write a standard operating procedure?
To write a standard operating procedure, or SOP, outline all the tasks to be included in the document, write an introduction, describe how to achieve each task, fix any mistakes and write the final draft. Making copies of the SOP and distributing them to potential users completes the process.
The outline helps you remember every task you intend to include in the SOP. Write a list of all the tasks you plan to accomplish in the outline. Once you have outlined all the tasks, introduce the SOP to its potential users. In the introduction, be sure to address the users directly and describe the content of the SOP briefly to them. Mention the benefits of using the SOP, and advise them of the best way to apply the SOP.
Once the introduction is in place, list and discuss each task one after the other in the order in which they appear in the outline. Under each task, list and describe in detail the steps that should be taken to accomplish the task, taking care to ensure clarity and conciseness. To fix the mistakes, allow a person such as an employee to read the completed SOP, use his feedback to correct erroneous areas of the SOP, and then write the final draft.
|
Thursday, July 30, 2009
Flat Universe - Another silly result from relativity
I was reading an article in the NewScientist by Pedro Ferreira
"Dark energy may be bending the universe out of shape"
"WE LIVE in a special time. For the past two decades, most of my colleagues and I have been working under the assumption that we can know everything about the universe. We know the amount of matter and energy it contains. We know its shape is flat. We can trace its history from the earliest moments after the big bang and we can even predict its fate. Or at least we thought we could."
The article continues, "Together with later measurements from NASA's WMAP satellite, the results nailed down the geometry of the universe to within a few per cent."
And, "But it was only after a few years of whittling away at the signals in early 2000 that we saw our own clear evidence of hot and cold spots with a typical size of 1 degree."
It is absolutely ridiculous to state the universe is flat. There are Gamma Ray Bursts occurring all over the celestial sphere. Many of them have a red shift that would put them at >10 billion light years away. These are enormous explosions that create GRBs. These GRBs are probably galactic clusters exploding. If the universe were flat then these GRBs would be in the universe's elliptic or that 1 degree. This is not occurring.
In this Standard Vibration Model, Dark Energy is the aether that fills the universe. 100% of the universe is Dark Energy. Dark Energy is the medium in which all vibrations occur. Only 4% of the known universe is Baryonic in nature and the other 96% is Dark Matter.
|
Jamie M. Joseph, Ph.D.
Weston Cognitive Behavior Therapy & Evaluation
Helping you navigate the path of your life...
Eating Disorders
Dr. Joseph has extensive experience working with individuals with eating disorders and provides a safe and confidential environment where they can address eating and body image issues. Dr. Joseph is a specialist in Cognitive Behavioral Therapy (CBT) which has been shown to be highly effective in helping those who are struggling with eating disorders and assists them in decreasing destructive behaviors and emotions, and replacing them with more productive, healthy and helpful behaviors and emotions.
An eating disorder is characterized by severe disturbances in eating behavior. Presently, the professional community recognizes and names several different types of eating disorders.
1) Anorexia Nervosa: Individuals with anorexia nervosa have a distorted body image that causes them to see themselves as overweight even when they are not. They may even be extremely thin and they will still see themselves as overweight. This is called a ‘distorted body image’ and it is one of the central features of anorexia nervosa. These individuals have an intense fear of gaining weight and becoming ‘fat.’
Refusing to eat (self-induced starvation), exercising compulsively, and developing habits such as refusing to eat in front of others are, at times, characteristic of this disorder. Despite refusing to eat, these individuals often do not lose their appetite. They spend tremendous amounts of time and energy attempting to control their weight, shape and size through their food intake and behavior. People with anorexia lose large amounts of weight and may even starve to death, or die of other medically related complications. Anorexia has the highest mortality rate among all of the psychological disorders.
2) Bulimia Nervosa: Individuals with bulimia nervosa eat excessive amounts of food (binge) and then rid (purge) themselves of the food and calories consumed using various methods such as laxatives, enemas, diuretics, vomiting, fasting or exercising. These methods aimed at eliminating the calories and food consumed during the binge are called ‘compensatory behaviors.’ The person feels out of control during the binge and it may occur in secrecy. Feelings associated with the binge include disgust, shame, and fear. The purging behaviors serve to reduce these negative emotions. Disturbance of the perception of one’s size and shape are also a central feature of bulimia nervosa.
3) Binge Eating Disorder: Individuals with a binge eating disorder experience frequent episodes of binging, which is eating excessive amounts of food. However, the major difference between binge eating disorder and bulimia nervosa is that binge eaters don't purge their bodies of excess food and calories.
4) Eating Disorders Not Otherwise Specified: Individuals that fall into the diagnostic category of an eating disorder not otherwise specified have eating-related issues, but they don't meet the official criteria for anorexia, bulimia or binge eating.
|
A plural of speculum.
(spek'yu-lum) (-la) plural.specula [L. speculum, a mirror]
1. An instrument for examination of canals or hollow organs. See: illustration
2. The membrane separating the anterior cornua of lateral ventricles of the brain. Synonym: septum pellucidum
bivalve speculum
A speculum with two opposed blades that can be separated or closed.
See: vaginal speculum
duck-bill speculum
A bivalve speculum with wide blades, used to inspect the vagina and cervix.
ear speculum
A short, funnel-shaped tube, tubular or bivalve (the former being preferable), used to examine the external auditory canal and eardrum.
eye speculum
A device for separating the eyelids. Plated steel wire, plain, Luer's, Von Graefe's, and Steven's are the most common types.
Pedersen speculum
A small vaginal speculum for examining prepubertal patients or others with small vaginal orifices.
vaginal speculum
A speculum, usually with two opposing portions that, after being inserted, can be pushed apart for examining the vagina and cervix. It should be warmed before use.
See: illustration
References in periodicals archive
My office staff no longer have to scrub, soak and autoclave metal specula between exams or position fiber optic lights in preparation for patient procedures.
The investigation found that re-usable metal vaginal specula were not always sterilised before being used again.
KleenSpec Disposable Vaginal Specula are designed to fit directly onto the Welch Allyn Cordless Illumination System--eliminating the worry of cords being cleaned, getting in the way or breaking during an exam.
Delivery specula and universal framework in the following amounts and types of (designation according to international standards ISO 9404-1 and ISO 7212) for different oslonnosci chambers:
However, 39 (48%) of those samples had been obtained with nonlubricated specula.
Dini called his brethren the specula mundorum, "the mirrors of the worldly."
Welch Allyn surgical headlights, ear specula and vaginal specula achieved the highest year-over-year market share growth for distributed products in the category.
This blade can be easily disassembled for cleaning, and, unlike traditional specula, does not obstruct the view of the cervix.
Illuminated instruments consist of retractors, specula, laryngoscopes, trocars and any other devices that use fiberoptics to pass light to the work site.
|
Medical Assistant Work
in Medical
Having a medical career comes with a certain amount of job security for many people. It is an industry that is constantly growing and will always be around no matter what the state of the economy is. You do not have to become a doctor or a nurse to have a rewarding job; there are actually many different types of medical field careers.
These jobs range from medical administrative assistant to x-ray technicians. One of the positions in the health care industry that is growing faster than many other industries is medical assistant careers.
So exactly what is a medical assistant? Well, medical assistant duties vary from administrative tasks to medical tasks such as taking blood. The medical assistant job description varies depending on the specific medical assistant job you may have.
To start a medical assistant career you may consider joining a medical assistant program. These programs will give you the training and experience you need to become a certified medical assistant.
There are so many different types of health care facilities that require medical assistants. Medical assistants are cheaper than doctors and nurses and in this economy they are starting to be in much higher demand. So quality candidates should not have a hard time finding medical assistant employment.
You can actually verify this information on the government's Department of Labor statistics website. They show that this field is growing faster than average. The website also shows the salary of health care assistants. The details are pretty specific, as you can see which types of offices pay what. Additionally, you can tell which cities have the highest concentrations, etc.
I hope that you can see it is a myth that you must go to medical school to have a medical career. This is just simply not the case! However, if you do want to pursue other careers such as becoming a doctor or a nurse, being a medical assistant is a fantastic way to get valuable experience. This experience will help you understand all aspects of running a medical office as well as the patient experience.
You can tell when you walk into your doctor's office how many people it takes to make the office run smoothly. You have receptionists, insurance specialists, medical transcriptionists, billers and coders, lab technicians, etc. Hospitals, doctors offices, insurance companies, schools, and many other organizations hire medical professionals to help them run their offices.
So you can see that having a medical career could be very promising. It is recommended that you research the different types of industries. For example, would you like to work in a dentist's office or a plastic surgeon's office? You can adjust your training in a plan to find employment in the office type of your choice.
The people who get a lot of fulfillment out of their jobs in the medical field are those that enjoy helping people and are compassionate. This is because you will be interacting with patients. And as can be expected, patients are typically sick and may be anxious or in an otherwise vulnerable state of mind. It takes an entire staff in a doctor's office to ensure that the patient is calm and receptive to the health care you are trying to provide.
|
PreventiNe Life Care
Trans Fat Index
Trans fat is an “artificial fat” found mostly in the fried items we eat and in some processed foods, such as biscuits and cakes, where it gives them a longer shelf life.
These are industrially produced fats which are formed when oil goes through a process called hydrogenation in order to become more solid. This type of hydrogenated fat is used for frying or as an ingredient in processed foods.
Trans fat is considered to be the worst type of fat we can eat. Unlike other dietary fats, trans fat has a double bad impact - On one hand it increases the bad cholesterol (LDL) and at the same time it lowers the good cholesterol (HDL).
Why measure your Trans Fat Index?
Trans fats in your blood come from trans fats (artificial / industrial fats) in food.
In today’s lifestyle, most people have high levels of trans fats in their blood.
High levels of trans fats are related to increased risk for heart disease.
Blood levels of trans fats can be reduced by simple dietary changes.
The best way to know your blood level of trans fats is by measuring it with the Trans Fat Test and taking the preventive steps mentioned in the report to lower its level.
OMEGTEST - The trans fat test
The OMEGTEST package includes the Sample Collection kit, a Sample return envelope and a results report (emailed / hard copy delivered) to you in 5 days from receipt of your sample in our laboratory.
A simple do-it-yourself test: it needs only a self-collected “dried blood spot”, involves negligible pain and is safe.
The report will display the TRANS FAT INDEX and hence the health risk posed to you because of Trans Fats.
The report shall also give useful information about how to manage the levels.
Should you need to consult an expert to understand the road-ahead, you may book a call with us. A 15 minute consultation by an MD shall be provided for no extra cost to you.
What are trans fats?
Trans fats are unsaturated fats (i.e., fats with one or more double bonds) in which at least one of the double bonds is in the trans configuration, instead of the more natural cis configuration. Trans fats can occur naturally at fairly low levels in some meat and milk products, but most of the trans fats that people consume are industrially produced. That is, they are produced from liquid vegetable oils by the process of “hydrogenation”.
What are the key sources of Trans Fats?
Trans Fats are “artificial fats” and, unlike naturally occurring fats, they are produced through industrial processes. The key sources are:
Baked food products:
Donuts, muffins, crackers, popcorn and many other processed “grain-based” foods tend to be high in trans fats.
Deep-fried fast foods:
Oils used to fry fast foods can be rich in trans fats, increasing the amount in the final product. Recycled oil is even worse with respect to trans fat content.
Meat and Dairy:
Meat and dairy contain naturally-occurring trans fats, which generally do not appear to have a negative effect on our health. However, the eventual impact depends on the quantity and quality of consumption.
Why are trans fats there in foods?
The food industry began to produce trans fats as a replacement for butter. This is because butter was progressively being declared a health risk due to its high content of saturated fat. The industry therefore needed an alternative for its frying and baking needs. Adding hydrogen to unsaturated oils created a semi-solid, trans fat product that was shelf-stable and made stable baked goods and crispy fried foods. Unfortunately, trans fats turned out to be no better than butter with regard to heart disease risk. In fact, several studies have now rated them as a worse food component even compared to cholesterol.
Why are trans fats bad for heart?
Trans fats increase the risk of heart disease by way of their negative impact on cardiovascular risk factors, which leads to an increased risk of heart disease and strokes. Trans fats cause an increase in the bad cholesterol, a reduction in the good cholesterol, and a worsening of the Total cholesterol:HDL-cholesterol ratio.
In certain controlled studies, high trans fat levels in red blood cells have been shown to be associated with a 47% increased risk of sudden cardiac arrest. Some studies also show an increased risk of diabetes in women who consumed more trans fats, though this is not as consistent as the heart disease data. It is estimated that eliminating trans fat from the food supply would avert between 6-19% of heart disease-related deaths per year, totaling up to 228,000 deaths!
This information is sourced from the article “Trans Fatty Acids and Cardiovascular Disease,” published in The New England Journal of Medicine in 2006 by Dr. Dariush Mozaffarian et al.
How can you tell how much trans fat is in a particular food?
The vast majority of trans fats in the diets are found in packaged foods. The Nutrition Facts Panel on packaged foods lists the amount of trans fats per serving. If a serving of the food has less than 0.5 g of trans fat, then the manufacturer can list it as “0.” Non-packaged foods like bulk grains, cereals, candies; store-packaged meat; fresh fruits and vegetables do not have a Nutrition label and thus any trans fats in those foods will not be listed. The practice of disclosure is now fast evolving and soon in future it is likely to become mandatory.
To avail OmegTest
|
Now all roads lead to France and heavy is the tread
Of the living; but the dead returning lightly dance.
Edward Thomas, Roads
Monday, April 21, 2014
On the Importance of Bruchmüller
Colonel Bruchmüller
Jonathan M. House, Combined Arms Warfare in the 20th Century
1. Jonathan: Worth noting that Bruchmuller called his artillery technique, somewhat poetically, the Feuerwalze (Fire Waltz). An apt description, given how the fires shifted back and forth.
John Snow
2. As I recall, the rapid German reclamation of much of the BEF's successes at Cambrai was attributed to the first application of fast-moving troops bypassing all significant strong pockets of enemy and supported by aircraft information and flexible artillery bombardment.
I might be a bit off on this so other comment would be appreciated
3. Also see David T. Zabecki's Steel Wind: Colonel Georg Bruchmueller and the Birth of Modern Artillery (paperback - Praeger, 1994).
HMS, NC
|
Glenburnie-Birchy Head-Shoal Brook
The communities of Birchy Head and Shoal Brook take their names from geographical features, while the community of Glenburnie is a tribute to the Scottish origins of the first settlers.
These communities are "enclave" communities within Gros Morne National Park: they are not part of the park itself, but are closely tied to the development of park facilities and tourism.
The community of Glenburnie was not settled until the late 1880s. The first settler is believed to be Hugh McKenzie, who came to cut wood but realized the potential for farming in the area. The early settlers of Shoal Brook came to fish in the area. Birchy Head was not claimed by any fishermen because of the area's steep cliffs.
Most of the residents of the area were involved in farming and fishing.
As time passed, people moved on to other places, such as Corner Brook and Stephenville. Most of what was once farmland has now gone to hay or pasture land.
The communities were incorporated as one town in 1978. The population in 1991 was 365 people.
|
Wicca and Ecofeminism
Across many cultures that function predominantly with patriarchal thought, women are perceived to be closer to nature than men (Roach, 2003; King, 1989, 2003). This perception of women and nature portrays them as the ‘others’ – something that is different from and controlled by the dominant (King, 2003). The binary oppositions of male over female and culture over nature have been associated with more male-dominating religions like Christianity (Roach, 2003; Ruether, 2003). Goddess religions and earth-based spiritualities, on the other hand, find power in the female image, connect with nature through rituals, and believe it is the destiny of humanity to participate in the cycles of birth, death, and renewal that characterize life on earth. The Goddess and Mother Nature inspire individuals to repair the split between men and women, between man and nature, and God and the world. Ecofeminism, a type of feminist critique, uncovers the source of environmental deprivation in the structure of dualist thinking and patriarchal systems (King, 2003). Some Eco-feminists associate the feminine principle with the giving and nurturing of life, as valued in goddess religions and earth-based spiritualities. By contrast, they see patriarchal culture as rising from a fear of death, which ultimately creates a culture of domination over nature (King, 2003). This essay will explore the nature-based religion of Wicca and how it may influence feminist and ecological critiques. Ecofeminism will then be used to analyze society’s ecological views and determine if a possible shift towards a more caring and sustainable approach can be achieved through gender equality in religious practices.
Anthropologists have come to believe that the original human religion during the Stone Age Culture reflected a matriarchal society. The Mother Goddess was the one in power, and her son was the hunter. This religion was eventually conquered by patriarchal nomadic warriors, replacing the Mother Goddess with a male figure and reducing her to a mistress, wife, or daughter (Mellor, 2003). From the eighteenth century BC to the seventh century AD, the patriarchal reform religions like Christianity suppressed the goddess altogether and continued to worship the male sky god (Mellor, 2003; King 1989). This patriarchal structure of the Christian religion with a single male in power creates an imbalance in gender equality and further constructs a hierarchy that puts women below men. The gender inequalities present in Christian beliefs begin with the biblical story in Genesis of the Garden of Eden. God, the highest power in male form, watches over Adam and Eve in the Garden. Eve, the woman figure, is seen as subordinate to Adam and she later becomes the cause for the fall from the Garden (Merchant, 2003). This biblical story creates a patriarchal heritage and further blames women for the devastation of humanity (Mellor, 2003). Other biblical stories in Genesis 1 of Christian writing also view nature as destructive and harmful to mankind, similar to Eve’s threat to civilization (Harrison, 1999). Although Christianity suppressed the goddess religions for many centuries, feminist and nature-based spiritualities later re-emerged in North America when other feminist movements were taking place (Fry, 2000). A major growth spurt of feminists looking for alternatives to the patriarchal mainstream religions like Christianity was seen in the 1970s. An increasing number of women, as well as men, began exploring feminist approaches to spirituality. They sought to identify the world as a living being, learned to celebrate that vision in ritual, acted out of that vision to preserve the life of the earth, and built community around it (Starhawk, 1989). Wicca, also referred to as the Old Religion, Witchcraft, or Wisecraft, is a variety of Paganism that emerged in the United States in the 1960s and caught the interest of feminists during this time (Harvey, 1997). It is a religious practice based...
|
• Bible Films Blog
Wednesday, July 27, 2005
Golgotha (Ecce Homo, 1935)
During the Hundred Years War, the English soldiers, confronted with a French pope coined the (not-technically-accurate) chant "The Pope is French, but Jesus is English".1 Those who rallied to that particular call would no doubt have been horrified to find that the first words Jesus spoke from the silver screen were, in fact, in French.
Eight years after Cecil B. DeMille’s definitive silent film about the life of Christ, The King of Kings, Julien Duvivier brought Jesus back to cinema screens. The difference between the two films, however, is far greater than mere language. The King of Kings typifies the stagey pseudo-piety that has typified most American cinematic Christs, whereas Golgotha like Pasolini’s more widely known Il Vangelo Secondo Matteo (Gospel According to Matthew) captures something deeper, mysterious and more spiritual with its simpler feel.
That is not to say that Golgotha has not been done on a grand scale. The opening scenes of Jesus’s triumphal entry into Jerusalem are as vast as anything Hollywood has had to offer us; but the scene also typifies the difference. Jesus is almost entirely absent from it. Yet, even without subtitles or a knowledge of French it is clear what is happening. Duvivier teases the audience, showing the hustle and bustle of the crowd, the Pharisees discussing what has been going on, the action at a distance, and even a shot of the crowd from Jesus’s point of view as he passes through, but delaying showing us Christ himself.2 It is a fascinating device, drawing the audience into the story, and making them part of a crowd that is straining to see Jesus.3
When Jesus (played by Robert Le Vigan) finally does appear, over ten minutes into the film, it is at a distance, and shot from a low angle. He is almost obscured by his disciples, and there is a moment of confusion as to whether this is really he. The effect is to give the viewer the impression of actually being there, and discovering Jesus for the first time. Caught in the crowd, nudging ineffectively towards the action to catch a glimpse of the man everyone is chattering about. Eventually you can make out his distant figure moments before he disappears through the temple doors.
Inside the temple Duvivier delivers the finest sequence in the entire film, and one of the most memorable scenes in any Jesus film to date, as Jesus drives out the money-changers. The sequence starts with several quick shots intercut in a way reminiscent of Hitchcock’s legendary shower scene in the later Psycho. The first shot prefigures the action to come as coins swept off an off-screen table crash onto the floor and scatter. It is quickly followed by a swift series of action and reaction shots. The sequence culminates in a single long take, over 30 seconds long, which is the most impressive of them all. The camera tracks through the palisades of the temple in Jesus’s wake, straining to catch up with him as he zigzags from stall to stall. However, the shot isn’t focussed on filming Jesus so much as capturing the moment. In fact, as the camera weaves its way around, Jesus is only occasionally in shot. The result of this shot is that it captures the action, and chaos of the incident, in a way that no other Jesus film, either before or after, has quite managed. Considering this scene was created 6 years before Welles supposedly revolutionised camerawork with Citizen Kane, it is all the more remarkable. It also, albeit unintentionally, created documentary style footage, years before the documentary genre would be invented.
Like Jesus Christ Superstar, and to a greater extent the most recent Jesus film - Mel Gibson’s The Passion of the Christ - Golgotha returns to the roots of the Jesus film genre and focuses on the immediate events leading up to Jesus’s death. Hence the majority of the dialogue focuses on the political machinations both within the Sanhedrin, and between the Jewish leaders and Pilate. The centrepiece of the film is arguably the conversation between Pilate (played by French star Jean Gabin) and Jesus, culminating in the former declaring "Ecce Homo" (behold the man), which was actually the original title for the film.4
What is surprising is that despite this being the first Jesus film with sound, Duvivier focuses on these conversations, many of them fictional, and ignores nearly all of Jesus’s teaching. There are three main exceptions however. The first is at the culmination of the cleansing of the temple scene where Jesus offers his usual synoptic epitaph to the baying crowd. The action moves back to the disapproving Pharisees and follows their discussion, before cutting back to Jesus in mid-flow. We hear the well known dictum "render to Caesar the things that are Caesar's, and to God the things that are God's." (Mark 12:17 and parallels) but miss the first part of the confrontation. This is arguably the most interesting of the three pieces of "teaching" that are encountered as the third piece, Jesus words during the Last Supper, is rendered fairly unimaginatively.
What is curious about the tax question scene is the way it pre-supposes audience acquaintance with the story, and then uses that to give the impression of real time. More importantly, it also illustrates that which we are told is happening – that the question is being set as a trap under the watchful eye of the squabbling Jewish Leaders. It also causes the viewer to interact with what is presented, and fill in the gaps in a way that few Jesus films do – stimulating the imagination, rather than laying it all on a plate for a passive audience.
Instead of these grand spectacles Duvivier again presents three beautifully understated events, but invests them with a deep sense of transcendence. Incredibly, the first does not occur right up until Jesus’s arrest. Even then Duvivier shuns the more crowd pleasing healing of Malchus’s ear in favour of the obscure words of John 18:6. As Jesus identifies himself as the man the soldiers seek he simply says "I am he". John then records that as he did so the soldiers "drew back and fell to the ground" (RSV). As far as I am aware, in over 100 years of films about the life of Christ, no other film has shown this incident. Even the recent word for word version of The Gospel of John (2003), inexplicably left the soldiers standing despite the narrator reading out those very words. By contrast Duvivier shows a range of responses, with some soldiers falling, and others remaining upright, but he films it so astonishingly that it somehow captures the truly phenomenal nature of such an event.
Once he has been arrested Jesus is as usual passed from pillar to post, in a fashion that is broadly similar to many of the other depictions of Jesus’s death. There are however a number of places where the way Golgotha has been filmed really stands out. In particular, with The Passion of the Christ still on the cultural horizon there are a number of places where the comparison between it and Golgotha are especially interesting.
One of the flaws with The Passion of the Christ was that it failed to round out the Roman soldiers who sadistically inflicted so much suffering during the film’s two hours. Despite a shorter run time, Golgotha imparts the relevant scenes with a far greater degree of realism than The Passion, capturing, as it does, the sadism, but also the underlying insecurity, that drives such bullying. Harry Baur’s Herod typifies the approach. Herod’s ruthless mocking is interspersed by subtler indications that he is desperately trying to gain the approval of his all-too-pliant courtiers.
Duvivier also uses these scenes to commentate on the very real political events of that time. As the soldiers beat and ridicule Christ one of them mockingly salutes him with his arm fully aloft in a manner clearly reminiscent of the fascist and Nazi salutes. Golgotha was released in 1935 during the rise of Nazism (the very month, in fact, that the notorious Nuremberg Laws were enacted). The following year the world would fawningly ignore the regime’s explicit racism and attend the Olympic games it staged as a monument to its own self-importance. Given all this then, such a salute could not have gone unnoticed and as such it offered a powerful critique of the Nazi movement. Historically speaking, the beating of Jesus has been a universally condemned act, even though Jesus’s Jewishness has largely been toned down to allow that to happen. The film plays on this by comparing the condemned Romans with the celebrated Nazis; beating and bullying Jesus, a Jewish man. Such interplay exposes the hypocrisy of the tacit approval of anti-Semitism which would continue unchecked for several years along the road to the Holocaust. Too often Bible films have pandered to political ideologies. (DeMille’s pseudo-midrashic reworking of The Ten Commandments into a story that supported the US stance in the Cold War being only one example among many). Golgotha on the other hand (dangerously) challenges an ideology in such a way that it embodies the risky and prophetic spirit of its central character.
The film makes the effort, however, to show a range of reactions to Jesus’s torture. Whilst many in the crowd stand by to enjoy watching him whipped, it causes one onlooker to faint. Again, there is an interesting comparison to The Passion here. Golgotha shows the horror of it through the reaction of someone we can sympathise with, rather than the more "in your face" approach of Mel Gibson. Duvivier also shows the mixed reaction of the people to Jesus on the via dolorosa. Hassled both by the children throwing small stones at him and the sick who press for healing even as he stumbles towards the cross. Once there he is crucified, dies and is buried. The viewer is shown the sealing of the tomb from the inside, thus ending the main segment of the passion story as the film began – from Jesus’s point of view5
Over the years, the resurrection has proved to be one of the most difficult scenes for filmmakers to portray, with most of the literal depictions sliding into kitsch. As a result some filmmakers have opted either to portray it more cinematically (such as Martin Scorsese’s Last Temptation of Christ), or to replace the biblical episodes with extrabiblical scenes (The Passion of the Christ), or to leave it out completely (From the Manger to the Cross).
As with the earlier scene in Gethsemane, Duvivier manages to get it just right, skilfully combining the early accounts in Luke (the women at the tomb, and the road to Emmaus) with the later events in John (appearance amongst the disciples, Thomas, and Peter’s restoration). In so doing, the viewer is given a unique position, not having seen the risen Jesus as the majority of the disciples have, yet being too familiar with the story to be in any doubt about the truth behind their testimony (from a narrative point of view at least). Yet there is also something special about the first appearance of the risen Jesus as he materialises in the middle of the upper room. It is simple and effective, yet it also manages to capture the otherness of it.
It is a fitting end to the work, embodying as it does, the way Duvivier combines the ordinary with the extra-ordinary throughout the film. By downplaying the moments where many other Jesus films have opted to turn up the spectacle, he has invested them with a believability which touches the reality of the world in which we live.
1 - This line was in fact incorporated into the 2001 film A Knight’s Tale, which is set in the time of the crusades, although the comment is relegated to pre-joust banter.
2 This appears to be the first time that this device would be used to give Jesus’s point of view. Although it could be argued that The Greatest Commandment (1941) also uses this technique, the point of view shot in a film about Jesus (and its accompanying encouragement to see things as Jesus did) would not re-surface until much later – see Peter T. Chattaway, 'Come and See: How Movies Encourage Us to Look at (and with) Jesus' in Re-Viewing The Passion: Mel Gibson's Film and Its Critics, S. Brent Plate (Editor), Palgrave Macmillan, 2004
3 Incidentally DeMille also delays showing us Jesus, and he makes us wait considerably longer. The similarities, however, are fairly superficial.
4 In fact, it appears that the film’s original release title was Ecce Homo, only being changed to Golgotha after its American release.
5 This also compares interestingly with The Passion of the Christ, where we see the tomb opening from inside the tomb.
6 Unfortunately, some overly literal viewers have failed to grasp what Scorsese sought to do with his ending, and have accused him of leaving the resurrection out altogether.
|
Change Psychology: The Acquiescence Effect
When asked a question by another person, our answer is based not just on a rational consideration of what is being asked. In particular, our identity needs lead us to consider how we will appear to others.
We thus will tend to answer more in the positive rather than the negative, particularly if a leading question is used. We seek to acquiesce to the needs and direction of others, particularly when:
• they seem to be a superior in some way
• they have a need whereby we can easily help them
• answering the question fully seems like hard work
People thus tend to agree with one-sided statements. They will also agree with two contradictory statements when they are framed for agreement.
If you were asked ‘Do you think the government makes mistakes?’, you may well say yes. If you were asked whether the government generally gets it right, you may also agree.
Lawyers will ask complex questions of people in the witness box, who may give in and agree rather than try to unravel what is being asked of them. A butcher asks a customer, ‘Do you want the best cut?’. The customer agrees.
Using it
1. Use leading questions to get people to agree with you.
2. Use neutral questions if you want a more honest response.
Before you answer a question, consider the bias in the question and also the bias in your head. Don’t say ‘yes’ just to make others happy.
|
packet switching
Read Also:
• Packet-switching
[pak-it-swich-ing] /ˈpæk ɪtˌswɪtʃ ɪŋ/ noun 1. a method of efficient data transmission whereby the initial message is broken into relatively small units, or packets, that are routed independently and subsequently reassembled. noun 1. (computing) the concentration of data into units that are allocated an address prior to transmission noun a method of data transfer in […]
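As a rough illustration of the definition above (this sketch is not part of the dictionary entry; the 8-byte packet size is an arbitrary choice), breaking a message into sequence-numbered units and reassembling them after out-of-order delivery might look like this in Python:

```python
import random

def to_packets(message: bytes, size: int = 8):
    """Break a message into small, individually numbered units (packets)."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Packets may arrive out of order; sequence numbers restore the message."""
    return b"".join(chunk for _, chunk in sorted(packets))

original = b"packet switching sends data in independently routed units"
packets = to_packets(original)
random.shuffle(packets)  # simulate packets taking different routes and arriving out of order
assert reassemble(packets) == original
```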
• Packet switch node
(PSN) A dedicated computer whose purpose is to accept, route and forward packets in a packet-switched network. (1994-11-30)
• Packet writing
storage A technique for writing CD-Rs and CD-RWs that is more efficient in both disk space used and the time it takes to write the CD. Adaptec’s DirectCD is a packet writing recorder for Windows 95 and Windows NT that uses the UDF version 1.5 file system. [Is this true? How does it work?] (1999-09-01)
• Packframe
[pak-freym] /ˈpækˌfreɪm/ noun 1. a framework, usually of lightweight metal tubing, that supports a backpack on the wearer, often by curved extensions that fit over the shoulders. /ˈpækˌfreɪm/ noun 1. (mountaineering) a light metal frame with shoulder straps, used for carrying heavy or awkward loads
|
Monday, 4 August 2014
CHAPTER 12- Integration by Parts
This section will introduce you to a very powerful concept in integration. Using integration by parts, we can (theoretically) calculate the integral of the product of any two arbitrary functions. You should be very thorough with the use of this technique, since it will be extensively required in solving integration problems.
Let $f(x)$ and $g(x)$ be two arbitrary functions. We need to evaluate $\int f(x)\,g(x)\,dx$. The rule for integration by parts says that:
\[ \int f(x)\,g(x)\,dx = f(x)\int g(x)\,dx - \int \left\{ f'(x)\int g(x)\,dx \right\} dx \]
where $f'(x)$ represents the derivative of $f(x)$.
Translated into words (which makes it easier to remember!), this rule says that:
The integral of the product of two functions = (first function) × (integral of the second function) − integral of {derivative of the first function × integral of the second function}
Theoretically, we can choose either of the two functions in the product as the first function and the other as the second function. However, a little observation of the expression above will show you that since we need to deal with the integral of the second function ($\int g(x)\,dx$, above), we should choose the second function in such a way that it is easier to integrate; consequently, the first function should be the one that is more difficult to integrate of the two. We can thus define a priority list pertaining to the choice of the first function, corresponding to the degree of difficulty in integration:
I: Inverse trigonometric functions
L: Logarithmic functions
A: Algebraic functions
T: Trigonometric functions
E: Exponential functions
The initial letters in this list should make it clear to you why this rule of thumb for the selection of the first function is referred to as the ILATE rule.
It is important to realise that the ILATE rule is just a guide that serves to facilitate the process of integration by parts; it is not a rule that always has to be followed; you can choose your first function contrary to the ILATE rule also if you wish to (and if you are able to integrate successfully with your choice). However, the ILATE rule works in most of the cases and is therefore widely used.
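As a worked illustration of the ILATE rule (an example added here, not from the original text): to evaluate $\int x\cos x\,dx$, the algebraic factor $x$ ranks above the trigonometric factor $\cos x$, so $x$ is taken as the first function:
\[ \int x\cos x\,dx = x\int \cos x\,dx - \int\left\{ \frac{d}{dx}(x)\int \cos x\,dx \right\}dx = x\sin x - \int \sin x\,dx = x\sin x + \cos x + C \]
Had we chosen $\cos x$ as the first function instead, the integral remaining on the right would involve $x^2 \sin x$ and be harder than the one we started with.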
The integration by parts rule can be applied to the integral of a single function also, taking unity as the second function:
\[ \int f(x)\,dx = \int \underbrace{f(x)}_{\text{first func}} \cdot \underbrace{1}_{\text{second func}}\,dx = f(x)\cdot x - \int f'(x)\cdot x\,dx \]
For example,
\[
\begin{aligned}
\int \ln x\,dx &= \int (\ln x)\cdot 1\,dx \\
&= (\ln x)\cdot x - \int \frac{1}{x}\cdot x\,dx \\
&= (\ln x)\cdot x - \int dx \\
&= x\ln x - x + C
\end{aligned}
\]
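If you want to check such results mechanically, a computer algebra system will reproduce them. The following small Python sketch (added here for illustration, not part of the original tutorial) verifies the antiderivative of $\ln x$ using the sympy library:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
# integrate() applies standard techniques, including integration by parts.
antiderivative = sp.integrate(sp.ln(x), x)
print(antiderivative)  # prints x*log(x) - x
# Differentiating the result recovers the integrand, confirming the answer.
assert sp.simplify(sp.diff(antiderivative, x) - sp.ln(x)) == 0
```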
Before moving on to the examples, let us go through the justification of the integration by parts rule. Consider two functions $u(x)$ and $v(x)$:
\[ \frac{d}{dx}\bigl(u(x)\cdot v(x)\bigr) = u(x)\cdot\frac{d(v(x))}{dx} + \frac{d(u(x))}{dx}\cdot v(x) \]
\[ \Rightarrow \quad d(u(x)v(x)) = u(x)\,d(v(x)) + d(u(x))\,v(x) \]
\[ \Rightarrow \quad u(x)\,d(v(x)) = d(u(x)v(x)) - d(u(x))\,v(x) \]
Integrating both sides, we obtain
\[ \int u(x)\,d(v(x)) = u(x)v(x) - \int d(u(x))\,v(x) \qquad \ldots(1) \]
Let $u(x) = f(x)$ so that $d(u(x)) = f'(x)\,dx$, and let $v'(x) = \frac{d(v(x))}{dx} = g(x)$ so that $d(v(x)) = g(x)\,dx$ and $v(x) = \int g(x)\,dx$. Substituting these values in (1), we obtain
\[ \int f(x)\,g(x)\,dx = f(x)\int g(x)\,dx - \int\left( f'(x)\int g(x)\,dx \right)dx \]
This is the expression we stated earlier. Thus, this rule is simply obtainable from the product rule of differentiation.
|
Land (Economic) Transformation/Empowerment
Land transformation, like skills development, affirmative procurement, etc, is one way of economically empowering the people. Land, like access to finance, still remains very much inaccessible for the majority of Namibians. Note, (not I, but) the Agricultural (Commercial) Land Reform Act 6 of 1995 states in its preamble: “… to provide for the acquisition of agricultural land by the State for the purposes of land reform and for the allocation of such land to Namibian citizens who do not own or otherwise have the use of any or of adequate agricultural land, and foremost to those Namibian citizens who have been socially, economically or educationally disadvantaged by past discriminatory laws or practices.” It is common knowledge that the government is being frustrated in its land transformation process for reasons such as (see: Minister’s remarks in the PTT on Land Reform): – the “unavailability” of land (land owners not coming to the assistance of government in making land transformation work); – if land does become available, the cost is a factor. What avenues are there for the government to acquire land? In terms of the applicable law, government can acquire land through: – the exercise of its pre-emptive right to buy land in the event of a willing seller-willing buyer transaction; – the compulsory acquisition of land by means of expropriation. In both these instances, provision is made in the relevant legislations dealing with the acquisition of land, for reasonable and fair compensation in exchange for land. The above two options are the only ones available to the government. It is, of course, a fact that, during the colonial periods, many indigenous Namibians lost their land when the land was forcefully and illegally taken from them. We, however, do not have a legal provision for the restitution of land rights which would entitle those who lost their land to reclaim these rights. This would have been an option available to government in the process of land transformation – something for government to consider. Our land is the primary source of empowering the people and to meaningfully make an impact on economic transformation. For how long will we as Namibians look on how foreigners are trading with this primary and scarce commodity to our detriment and that of our children? Last year, we read in the newspapers of two South African brothers who own 70 000 hectares of prime Namibian land (natural resource) boasting about how they would develop that prime piece of land into the biggest game lodge in (Namibia) Africa, if not in the whole world, to the tune of billions of Namibian dollars. Barely 12 months later we read that this piece of land might be sold to a Russian oil magnate to the tune of N$100 million. Why is the wealth of the country and the economic prosperity of the people locked up in this land? Why can the owners of this land not use the riches they accumulated from this Namibian resource to plough it back for the benefit of themselves, Namibia and its people? The owners of this land now have the chance to prove to the nation that they will unlock the potential that they talk about and which is locked up in the land, and develop it to its full potential for the good of all. Have you imagined yourself, a black Namibian, going to Russia and applying for a diamond concession, and if you fail you take the government to court, to Spain and apply for a fishing concession, to South Africa, Germany, France or Austria and acquire a farm of 70 000 hectares?
Do not even try. What do some people think of the Namibian people, the authorities and the laws of the country? Do they not think our government also needs the land to empower its own people? It is so obvious from such intentions that they do not know what to do with the land. It is understandable – the land is too big for so few people. It can be more useful for about 35 families who so desperately need land, and this land would so perfectly fit in with our national land transformation plans. Just think what the Namibian people could do with these 70 000 hectares with proper education, planning and access to finance – establish the largest community-based tourist management project, commercial farming; game farming; mineral exploitation, etc., etc. This is a once-off opportunity for the government to acquire about 26 farms at one go in line with the government’s land transformation policy and the laws of the land. These farms could all be located in one area, making control and supervision so much easier and less costly. If I were to be the adviser of the Minister of Lands and Resettlement, I would have advised the Minister, in the national interest, to acquire this land in compliance with the laws of the land. D. CONRADIE
|
Amarillo Diocese offers CPR class
Cardiac arrest, more commonly known as a heart attack, is one of the leading causes of death in the United States. The American Heart Association reports about 359,400 Americans have a heart attack in any given year, and the survival rate is generally less than ten percent.
However, if CPR (cardiopulmonary resuscitation) is performed within 6 minutes of cardiac arrest, the victim's chances of survival can double or even triple.
And according to the AHA, 88 percent of cardiac arrests occur in the home - which means if you know CPR, you're most likely to use it to save a loved one.
So the Roman Catholic Diocese of Amarillo held a free training session for anyone interested in learning the life-saving technique.
"I believe that every household - one member of the household, at least - should know first aid and CPR," said Oscar Guzman of the diocese, "because if you have children or even just your neighbors - anywhere you are, there are people around you. And if somebody knows, you could be the lifesaver of that family or even people you don't know."
If you'd like to learn more about CPR or the Roman Catholic Diocese of Amarillo, follow the links attached to this story.
|
Friday, January 9, 2015
What Is Web Design?
Most discussions of Web design get off track in short order, because what people mean by the expression varies so dramatically. While everyone has some sense of what Web design is, few seem able to define it exactly. Certain components, such as graphic design or programming, are a part of any discussion, but their importance in the construction of sites varies from person to person and from site to site. Some consider the creation and organization of content—or, more formally, the information architecture—as the most important aspect of Web design. Other factors—ease of use, the value and function of the site within an organization's overall operations, and site delivery, among many others—remain firmly within the realm of Web design. With influences from library science, graphic design, programming, networking, user interface design, usability, and a variety of other sources, Web design is truly a multidisciplinary field.
What is HTML?
Getting Started
Before we get down to business we should point out that there are two very different ways to make a website.
1. The quickest and easiest way to make a site is to use an on-line "wizard" supplied by your internet service provider (ISP) or some other organization.
To use this method, visit the internet address given to you by the organization providing the service. There you will be guided through a series of simple steps which will result in a site being constructed for you. The advantage of this method is that you don't need any skills other than using your browser. The drawback is that you are very limited in what you can do with this kind of website.
2. The other approach is to construct a website on your own computer, then "upload" it to the internet so that other people can access it. This is the way most serious sites are made, and it's the method that this tutorial will cover.
Note: As the internet is such a complicated environment, these introductory tutorials tend to over-simplify explanations of how things work. You shouldn't take all our examples and illustrations too literally, but the information is conceptually sound. In time, you can choose to make the effort and build up a more technically accurate understanding.
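If you want a feel for the second approach before worrying about uploading, you can build and preview a page entirely on your own computer. The sketch below is only an illustration (the file name, page content and port number are arbitrary choices, not part of this tutorial); it writes a minimal page and serves it locally with Python's built-in web server:

```python
from pathlib import Path
import http.server
import socketserver

# A deliberately tiny page; real sites grow from the same ingredients.
Path("index.html").write_text(
    "<!DOCTYPE html>\n"
    "<html>\n"
    "  <head><title>My first site</title></head>\n"
    "  <body><h1>Hello, web!</h1><p>Built on my own computer.</p></body>\n"
    "</html>\n"
)

PORT = 8000  # arbitrary choice; visit http://localhost:8000 in your browser
with socketserver.TCPServer(("", PORT), http.server.SimpleHTTPRequestHandler) as httpd:
    httpd.serve_forever()  # stop with Ctrl+C; uploading to a host comes later
```

Once the page looks right in your browser, the Hosting and Publish! steps listed below cover moving it onto the internet for others to see.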
Introduction to Web Design
This tutorial is suitable for beginners in the field of web design. It includes:
1. Introduction - You are here.
2. HTML - An introduction to the computer language which forms the heart of web pages. Although it's not absolutely necessary to know this stuff, you should still read this page to get an idea of how it works.
3. Editors - Tools you can use to help create websites.
4. Hosting - How to find a home (host server) for your website.
5. Publish! - How to upload your site to the internet so that other people can visit it.
Important Note: Before taking this tutorial or attempting to build a website, you must have a basic understanding of the infrastructure which makes up the internet. You need to know what a server is, how websites exist and how people access them. If you don't understand these things you should not begin constructing your site! Instead you should take our short WWW Primer which explains it nice and simply. Then return here and carry on...
|
19073 Newtown Square
Newtown Township, also commonly referred to as Newtown Square, is a township in Delaware County, Pennsylvania, United States (prior to 1789 it was part of Chester County). Newtown Township is the oldest township in Delaware County. The population was 11,700 as of the 2000 census.
Newtown Township was settled in 1681 and incorporated as a Township in 1684.[1] In 1681, William Penn planned the "first inland town west of Philadelphia" at the intersection of Goshen Road (laid out in 1687) and Newtown Street Road (laid out in 1683). The Township was laid out around a center square, or "townstead" of approximately one square mile surrounded by farmland. Original purchasers of land in the Township received 1 acre (4,000 m2) in the townstead for every ten acres of surrounding farm land. Penn had planned New Town while still in England and was able to sell a considerable number of tracts before leaving England. However, many of these people "...never lived on the land," the properties changed hands many times, and thus, early growth of the Township was slow.
Newtown was organized as the Townstead with the majority of early settlers being Welshman. These Welsh "Friends" (Quakers) needed a road to facilitate their journey to meeting, the only established road at the time being Newtown Street Road, which ran north and south. As such, in 1687, an east-west road was laid out (Goshen Road) so the Friends could attend either Goshen or Haverford Meeting. By 1696, these friends had become numerous enough to hold their own meeting in Newtown and continued to meet in a private home until the completion of the Newtown Friends Meetinghouse in 1711. In the 18th century, Newtown was basically a farming community. Blacksmith and wheelwright shops emerged on the main arteries to service horse and buggy travelers. Taverns and inns were also opened to accommodate local patrons as well as drovers taking their livestock to the markets in Philadelphia.
Friday, August 17, 2007
How did David Copperfield make the Statue of Liberty disappear?
Before David Blaine was the biggest illusionist on the block, David Copperfield amazed audiences with his larger-than-life tricks. Perhaps his most famous stunt was in 1983 when he made the Statue of Liberty disappear (and then reappear) on live TV. The trick may be old, but people still wonder -- how did he do it?
Naturally, Mr. Copperfield isn't telling, so we consulted the research experts at The Straight Dope. They explain a common theory from William Poundstone's "Bigger Secrets." First, "Copperfield had a setup of two towers on a stage, supporting an arch to hold the huge curtain that would be used to conceal the statue." Those viewing the trick, both live and on TV, saw the statue through this arch.
After the curtain closed and while Mr. Copperfield addressed the audience, the stage was apparently rotating very slowly on a lazy Susan type turntable. When the curtain opened, it seemed the statue had disappeared, but in reality, the audience's view was blocked by one of the columns. The article also mentions that Copperfield used very bright lights to "nightblind" the audience. Those magicians are a tricky lot, eh?
Thanks to the magic of the Internet, you can watch the trick again. More than 20 years later, it's still pretty amazing.
Wednesday, March 19, 2014
Holidays and Observances for March 19 2014
Kick Butts Day
Kick Butts Day is a national day of activism that empowers youth to stand out, speak up and seize control against Big Tobacco. The next Kick Butts Day is March 19, 2014. We're expecting more than 1,000 events in schools and communities across the United States and even around the world.
Kick Butts Day events are designed to:
• Inform the public, policy makers and the media about tobacco's devastating consequences and the effectiveness of the policies we support;
• Encourage youth to reject the tobacco industry's deceptive marketing and stay tobacco-free; and
• Mobilize organizations and individuals to join the fight against tobacco.
While Kick Butts Day is officially held on one day each year, our hope is that every day will be Kick Butts Day in the fight against tobacco. You can use our Activities Database to organize events and take a stand against tobacco on any day of the year. By making every day Kick Butts Day, we can win the fight against tobacco use, the number one cause of preventable death in the United States and around the world.
National Chocolate Caramel Day
It's a sticky situation. We'll deal. March 19 is National Chocolate Caramel Day.
Caramels are soft, chewy, velvety bites of heaven. When they're combined with chocolate, they become a taste sensation second to none.
Luckily, making them at home isn't as daunting as you'd think. Yes, there's some equipment involved, and yes, there should always be an adult in the room, but the rest is easy.
First you need to decide if you want to make chocolate caramels, or caramels covered in chocolate (the former saves a step meaning you can get to eating the candies quicker).
When it comes to making candy, there are a few things you need to know. First, sugar behaves differently when heated to different temperatures. For example, heat sugar syrup to between 235 and 240 degrees Fahrenheit and you've got soft-ball stage candy. At this stage, the candy can be made into fudge or pralines. Heat it further to between 300 and 310 F and you've got hard-crack stage candy that’s perfect for brittles or lollipops.
Somewhere in the middle is the caramel, or firm-ball stage. When you heat sugar syrup to between 245 and 250 F you’ll get a candy that’s both malleable and firm. Your trusty candy thermometer will tell you when your syrup is at the right temperature. And no, this isn't the same one you use to see if you have a fever or check the doneness of your tenderloin.
To make actual caramel, you have to add cream and butter, and vanilla if you'd like. In most caramel recipes, the cream is added over heat, while the rest of the ingredients are added off the heat.
When you add your cream, the mixture will bubble up the sides of your pot. This is normal, and why it’s important for kids to have an adult present. Once it's off the heat, whisk in butter and chopped chocolate until the mixture is smooth and glossy. Pour it into a greased baking dish and cut it into squares once it's cool. And there you go – chocolate caramels.
Wrap them individually in small pieces of wax or parchment paper and see how long they last. Go ahead, try to eat just one.
National Poultry Day
Have a fowl feast on March 19 because it's National Poultry Day!
You won't have any trouble keeping your options open on this holiday: Poultry, the term used to define domesticated birds that are raised for their eggs and meat, includes chicken, quail, turkey, duck, goose and pheasant.
The first "white meat" (the other being pork) is the second most widely eaten meat in the world. Poultry legs, wings, breasts and thighs are the favorites for consumption, but roasting the whole bird is also an agreeable option, especially around the holidays.
There's more than one way to cook a bird, so roast, bake, fry, grill, stir fry, sauté, glaze, marinate, poach and kebab to your heart's content. Poultry and lemon pair well together, so amp up the flavor in favor of your usual roast chicken and glaze it with lemon curd. It could be life-changing.
Not feeling like serving the bird? It's also National Chocolate Caramel Day, so grab your favorite combination of both or dive in to this mouthwatering chocolate and caramel bread pudding. Whether you go sweet or savory, one of these two is sure to put a smile on your face.
International Client's Day
International Clients' Day is an unofficial holiday on which company owners and managers thank their clients. It is celebrated each year on 19 March. The day was first marked in 2010, the idea of Lithuanian and Russian business people. The thinking behind it is that the world has many memorable days, yet none dedicated to clients – the most valuable part of any business or organization.
So, each year on 19 March, company owners and managers are invited to join the celebration and to thank their clients. Traditionally, clients are treated to discounts, special offers or events held just for them. The day is supported by telecommunication companies, banks, retail stores, government organizations, educational institutions and more in Lithuania and Russia. The popularity and importance of Clients' Day is growing year after year, and more organizations are joining in.
Thursday, May 15, 2014
Holidays and Observances for May 15 2014
International Day of Families
During the 1980s, the United Nations began focusing attention on the issues related to the family. In 1983, based on the recommendations of the Economic and Social Council, the Commission for Social Development in its resolution on the Role of the family in the development process (1983/23) requested the Secretary-General to enhance awareness among decision makers and the public of the problems and needs of the family, as well as of effective ways of meeting those needs.
In its resolution 1985/29 of 29 May 1985, the Council invited the General Assembly to consider the possibility of including in the provisional agenda of its forty-first session an item entitled "Families in the development process", with a view to considering a request to the Secretary-General to initiate a process of development of global awareness of the issues involved, directed towards Governments, intergovernmental and non-governmental organizations and public opinion.
Later, based on the recommendations of the Commission for Social Development formulated in its 30th round of sessions, the Assembly invited all States to make their views known concerning the possible proclamation of an international year of the family and to offer their comments and proposals.
The Council also requested the Secretary-General to submit to the General Assembly at its forty-third session a comprehensive report, based on the comments and proposals of Member States on the possible proclamation of such a year and other ways and means to improve the position and well-being of the family and intensify international co-operation as part of global efforts to advance social progress and development.
In its resolution 44/82 of 9 December 1989, the United Nations General Assembly proclaimed the International Year of the Family. In proclaiming the Year, the General Assembly decided that the major activities for its observance should be concentrated at the local, regional and national levels, assisted by the United Nations system.
The United Nations Commission for Social Development was designated as the preparatory body and the Economic and Social Council as the coordinating body for the Year.
In 1994, The General Assembly proclaimed The International Day of Families, which is observed on the 15th of May every year. This day provides an opportunity to promote awareness of issues relating to families and to increase the knowledge of the social, economic and demographic processes affecting families.
National Chocolate Chip Day
National Chocolate Chip Day is an American food holiday that is celebrated on May 15th and August 4th. The origins of the food holiday are unknown. National Chocolate Chip Day can be celebrated by cooking cookies, muffins, chocolate bars, cupcakes, pancakes, ice cream, and other foods with chocolate chips included.
Chocolate chips were first used in 1937 by Ruth Graves Wakefield, who cut up pieces of a Nestle semi-sweet chocolate bar and folded them into the cookie batter she was baking. Rather than melting as she had expected, the fragments only softened and largely held their shape. The resulting chocolate chip cookies were a success, and Wakefield's recipe was published in several newspapers across New England.
Afterwards, Nestle and Wakefield reached an agreement that allowed Nestle to publish Wakefield's chocolate chip cookie recipe on the back of their bar wrappers. In 1939, Nestle released Nestle Toll House Real Semi-Sweet Chocolate Morsels.
National Chocolate Chip Day can be celebrated by cooking various foods that include chocolate chips. The foods made include cookies, muffins, pancakes, ice cream, brownies, and more. While chocolate chip cookies are a common food chocolate chips are used in, the holiday itself focuses on the chocolate chips themselves, rather than the cookies.
Tuberous Sclerosis Complex Global Awareness Day
On May 15, the Tuberous Sclerosis Alliance (TS Alliance) will join tuberous sclerosis complex (TSC) organizations around the world to observe the third annual TSC Global Awareness Day. On this day, thousands of individuals and families affected by TSC will join together to increase public awareness of the rare disease and share their stories of hope for the future.
This year’s TSC Global Awareness Day will feature a social media campaign called a “World of Thanks,” which will offer people with TSC the opportunity to honor those who have made a positive difference in their lives by posting pictures and stories on a dedicated website at
TSC is a genetic disorder that causes tumors to form in vital organs, primarily in the brain, eyes, heart, kidneys, skin and lungs. It is also the leading genetic cause of both autism and epilepsy. TSC is as common as Lou Gehrig’s disease or cystic fibrosis but is virtually unknown by the general population.
“One million people around the world are estimated to have TSC, with 50,000 or more right here in the United States,” explained Kari Luther Rosbeck, President & CEO of the TS Alliance. “Currently, there is no cure for TSC but ongoing research continues to be promising, not only for this disorder, but also for those suffering from autism, epilepsy and even cancer, because of TSC’s relation to the same genetic pathway.”
“At least two children born each day in the United States will have TSC,” Rosbeck continued. “However, many cases go undiagnosed due to obscurity of the disease, so events such as TSC Global Awareness Day are critically important to educate people about TSC as well as the importance of TSC research and how it relates to other more common diseases throughout the world.”
TSC Global Awareness Day is sponsored internationally by Tuberous Sclerosis Complex International (TSCi), a worldwide consortium of TSC organizations of which the TS Alliance is a member. Formed in 1974, the TS Alliance is dedicated to finding a cure for TSC, while improving the lives of those affected through the stimulation and sponsorship of research; development of programs, support services and resource information; and the development and implementation of public and professional education programs designed to heighten awareness of TSC.
National Nylon Stockings Day
Want to know the man who made millions of women REALLY happy? Back in 1940, it was a DuPont chemist named Wallace Carothers who "invented" nylon stockings.
Silk stockings were expensive and, until the end of World War II, nylons were scarce. But after their debut at the World's Fair, the new stockings returned with a vengeance. Shoppers crowded stores, and one San Francisco store was mobbed by 10,000 anxious shoppers, forcing it to halt stocking sales! Women went wild and 8,000 pairs were sold, as ladies could ditch their pesky silk stockings for a cheaper and more viable option: nylon! This collision of science and fashion was explosive, and the impact has been described as a "silent synthetic revolution that swept through fashion's vast empire!"
So 67+ years later, we celebrate this great innovation with Nylon Stockings Day! The impact of this new synthetic on fashion was powerful and ongoing. The fashion industry quickly embraced the new miracle fabrics, and design trends became less constrictive, more affordable and more versatile with the use of these new synthetics.
The introduction of nylon to the fashion industry was a game changer and to women all over the world, Wallace Carothers is our hero!
So share with us: when was the last time you saw your mother, grandmother or, yes, an old auntie actually get into a garter belt, silk stockings and a corset? Can't imagine how miserable that had to feel! Today, our undergarments are sassy, sexy and oh so much more comfy!
Peace Officers Memorial Day
Peace Officers Memorial Day and Police Week is an observance in the United States that pays tribute to the local, state, and Federal peace officers. The Memorial takes place on May 15, and Police Week is the calendar week in which the Memorial falls.
Straw Hat Day
The exact date of Straw Hat Day varies somewhat in the United States, but May 15th is often cited. Straw Hat Day is the unofficial start of summer and the official start of straw hat season, if you are worried about when you may begin wearing yours. And perhaps you should be worried. According to Neil Steinberg's book Hatless Jack, men have been murdered in living memory in the United States for the crime of wearing a hat out of season. Your felt hats should be put away until September 1, which makes perfect sense in most places, as straw wears much cooler because it lets air circulate. And protection from the sun seems a better reason to wear a hat than winter's "75% of your body's heat loss occurs through your head" rationale.
A boater (also straw boater, basher, skimmer, cady, katie, somer, sennit hat, or in Japan, can-can hat) is a kind of men's formal summer hat.
It is normally made of stiff sennit straw and has a stiff flat crown and brim, typically with a solid or striped grosgrain ribbon around the crown. Boaters were popular as casual summer headgear in the late 19th century and early 20th century, especially for boating or sailing, hence the name. They were supposedly worn by FBI agents as a sort of unofficial uniform in the pre-war years. It was also worn by women, often with hatpins to keep it in place. Nowadays they are rarely seen except at sailing or rowing events, period theatrical and musical performances (e.g. barbershop music) or as part of old-fashioned school uniform, such as at Harrow School. Since 1952, the straw boater hat has been part of the uniform of the Princeton University Band, notably featured on the cover of Sports Illustrated Magazine in October, 1955. Recently, soft, thin straw hats with the approximate shape of a boater have been in fashion among women.
Being made of straw, the boater was and is generally regarded as a warm-weather hat. In the days when all men wore hats when out of doors, "Straw Hat Day", the day when men switched from wearing their winter hats to their summer hats, was seen as a sign of the beginning of summer. The exact date of Straw Hat Day might vary slightly from place to place. For example, in Philadelphia, it was May 15; at the University of Pennsylvania, it was the second Saturday in May.
The boater is a fairly formal hat, equivalent in formality to the Homburg, and so is correctly worn either in its original setting with a blazer, or in the same situations as a Homburg, such as a smart lounge suit, or with black tie. John Jacob Astor IV was known for wearing such hats. The silent film comedian Harold Lloyd used the boater, along with horn-rimmed glasses, as his trademark outfit.
Hyperemesis Gravidarum Awareness Day
Hyperemesis Gravidarum (HG) is more than "morning sickness." This little-understood - but debilitating and potentially life-threatening - pregnancy condition affects the health of moms and babies worldwide, causing rapid weight loss, malnutrition, and dehydration.
HG Awareness Day, observed each year on May 15, raises the profile of this condition toward the goal of promoting investment in research, support, resources, and best clinical practices that will improve outcomes for HG women and their children.
Complications can include:
• dehydration and production of ketones
• nutritional deficiencies
• metabolic imbalances
• difficulty with daily activities
International MPS Awareness Day
International MPS Day began as a way to honor everyone in the MPS Community, to recognize, remember and rejoice in each other.
On International MPS Day we:
• Remember all the children and adults who suffer from MPS diseases.
• Think about the children we have lost.
• Think about the doctors and scientists who are dedicated to finding a cure for MPS.
• Remember each other and be thankful for the strength and support we both give and receive.
Mucopolysaccharidoses (MPS) diseases are genetic lysosomal diseases (LD) caused by the body's inability to produce specific enzymes. Normally, the body uses enzymes to break down and recycle materials in cells. In individuals with MPS the missing or insufficient enzyme prevents the proper recycling process, resulting in the storage of materials in virtually every cell of the body. As a result, cells do not perform properly and may cause progressive damage throughout the body, including the heart, bones, joints, respiratory system and central nervous system. While the disease may not be apparent at birth, signs and symptoms develop with age as more cells become damaged by the accumulation of cell materials.
Monday, 29 July 2013
Complexity? It's All Relative.
• Potential - the ability to evolve, survive
• Functionality - the set of distinct functions the system is able to perform
• Robustness - the ability to function correctly in the presence of endogenous/exogenous uncertainties
Because of the existence of critical complexity, complexity itself is a relative measure. This means that all statements such as "this system is very complex, that one is not" are without value until you refer complexity to its corresponding bounds. Each system in the Universe has its own complexity bounds, in addition to its current complexity value. Because of this, a small company can, in effect, be relatively more complex than a large one, precisely because it operates closer to its own complexity limit. Let us see a hypothetical example.
Imagine two companies: one is very large, the other small. Suppose each one operates in a multi-storey building and that each one is hiring new employees. Imagine also that the small company has reached the limit in terms of office space while the larger company is constantly adding new floors. This is illustrated in the figure below.
In this hypothetical situation, the smaller company has reached its maximum capacity and adding new employees will only make things worse. It is critically complex and, with its current structure, it cannot grow - it has reached its physiological growth limit and can do two things:
• "Add more floors" (this is equivalent to increasing its critical complexity - one way to achieve this is via acquisitions or mergers)
• Restructure the business
If a growing business doesn't increase its own critical complexity at the appropriate rate it will reach a situation of "saturation". If you "add floors" at a rate that is not high enough, the business will become progressively less resilient and will ultimately reach a situation in which it will not be able to function properly, not to mention facing extreme events.
Complexity is a disease of our modern times (more or less like high cholesterol, which is often a consequence of our lifestyles). Globalisation, technology and uncertainty in the economy are making life more complex and are increasing the complexity of businesses themselves. An apparently healthy business may hide (but not for long!) very high complexity. Just as very high cholesterol levels are rarely a good omen, the same may be said of high complexity. This is why companies should run a complexity health-check on a regular basis.
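As a rough illustration of what such a health-check could look like (a minimal sketch only - the figures, names and the 80% warning threshold below are assumptions, not Ontonix's method), one can track measured complexity against a system's own critical complexity and flag how close to the limit it operates:

    # Toy "complexity health-check": how close does a system run to its own limit?
    # Complexity values are placeholders; a real assessment would come from a
    # dedicated tool, not from this sketch.

    def health_check(name, complexity, critical_complexity, warn_at=0.8):
        """Report relative complexity = complexity / critical complexity."""
        ratio = complexity / critical_complexity
        status = "CRITICAL" if ratio >= 1.0 else "WARNING" if ratio >= warn_at else "OK"
        print(f"{name}: C = {complexity:.1f} of {critical_complexity:.1f} "
              f"({ratio:.0%} of limit) -> {status}")
        return ratio

    # A small firm can be *relatively* more complex than a large one:
    health_check("Small company", complexity=42.0, critical_complexity=45.0)
    health_check("Large company", complexity=310.0, critical_complexity=520.0)

Run regularly, the interesting quantity is not the raw complexity but the ratio to the critical bound - exactly the "relative" point made above.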
So, the next time you hear someone say that something is complex, ask them about critical complexity. It's all relative!
Complexity: A Link Between Science and Art?
Serious science starts when you begin to measure. According to this philosophy we constantly apply our complexity technology in attempts to measure entities/phenomena/situations that so far haven't been quantified in rigorous scientific terms. Of course we can always apply our subjective perceptions of the reality that surrounds us so as to classify and rank, for example, beauty, fear, risk, sophistication, stress, elegance, pleasure, anger, workload, etc., etc. Based on our perceptions we make decisions, we select strategies, we make investments. When it comes to actually measuring certain perceptions complexity may be a very useful proxy.
Let's consider, for example, art. Let's suppose that we wish to measure the amount of pleasure resulting from the contemplation of a work of art, say a painting. We can postulate the following conjecture: the pleasure one receives when contemplating a work of art is proportional to its complexity. This is of course a simple assumption, but it will suffice to illustrate the main concept of this short note. Modern art often produces paintings which consist of a few lines or splashes on a canvas. You just walk past. When, instead, you stand in front of a painting by, say, Rembrandt van Rijn, you experience awe and admiration. Now why would that be the case? Evidently, painting something of the calibre of The Night Watch is not a matter of taking a spray gun and producing something with the aid of previously ingested chemical substances. Modern "art" versus a masterpiece. Minutes of delirium versus years of hard work. Splashes versus intricate details. Clear, but how do you actually compare them?
We have measured the complexity of ten paintings by Leonardo da Vinci and Rembrandt. The results are reported below without further comments.
[Figure: measured complexity values of paintings by Leonardo da Vinci and Rembrandt]
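The complexity measure used for these paintings is OntoSpace's and is not spelled out in the post. Purely as a crude, generic stand-in, one can at least rank digital reproductions by a computed quantity such as the Shannon entropy of their grey-level histogram; the file names below are hypothetical:

    # Crude image-"complexity" proxy: Shannon entropy of the grey-level histogram.
    # This is NOT the Ontonix measure; it only illustrates ranking images by a number.
    import numpy as np
    from PIL import Image

    def grey_entropy(path):
        img = np.asarray(Image.open(path).convert("L"))      # greyscale pixels 0..255
        hist, _ = np.histogram(img, bins=256, range=(0, 255))
        p = hist / hist.sum()
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))                # bits per pixel

    for painting in ["night_watch.jpg", "modern_splash.jpg"]:  # hypothetical files
        print(painting, round(grey_entropy(painting), 2))

A histogram entropy ignores spatial structure entirely, so it is a much weaker notion than the structured-information measure discussed on this blog - but it makes the ranking exercise concrete.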
Sunday, 28 July 2013
Resilience, put in layman's terms, is the capacity to withstand shocks, or impacts. For engineers it is a very useful characteristic of materials, just like Young's modulus, the Poisson ratio or the coefficient of thermal expansion. But high resilience doesn't necessarily mean high performance, or vice versa. Take carbon fibres, for example. They can have a Young's modulus of 700 gigapascals (GPa) and a tensile strength of 20 GPa, while steel, for example, has a Young's modulus of 200 GPa and a tensile strength of 1-2 GPa. And yet, carbon fibres (as well as alloys with a high carbon content) are very fragile while steel is, in general, ductile. Basically, carbon fibres have fantastic performance in terms of stiffness and strength but respond very poorly to impacts and shocks.
What has all this got to do with economics? Our economy is extremely turbulent (and this is only just the beginning!) and chaotic, which means that it is dominated by shocks and, sometimes, by extreme events (like the unexpected failure of a huge bank or corporation, the default of a country which needs to be bailed out, like Ireland, Greece or Portugal, or natural events such as tsunamis). Such extreme events send out shock waves into the global economy which, by virtue of its interconnectedness, propagates them very quickly. This can cause problems for numerous businesses even on the other side of the globe. Basically, the economy is a super-huge, dynamic and densely interconnected network in which the nodes are corporations, banks, countries and even single individuals (depending on the level of detail we are willing to go to). It so happens that today, very frequently, bad things happen at the nodes of this network. The network is in a state of permanent fibrillation. It appears that the intensity of this fibrillation will increase, as will the number of extreme events. Basically, our global economy will become more and more turbulent.

By the way, we use the word 'turbulence' with nonchalance, but it is an extremely complex phenomenon in fluid dynamics with very involved mathematics behind it - luckily, people somehow get it! And that's good. What is not so good is that people don't get the concept of resilience. And resilience is a very important concept not just in engineering but also in economics. This is because in turbulence it is high resilience that may mean the difference between survival and collapse. High resilience can in fact be seen as a sort of stability. It is not necessary to have high performance to be resilient (or stable). In general, these two attributes of a system are independent. To explain this difficult (some say counter-intuitive) concept, let us consider Formula 1 cars: extreme performance, for very short periods of time, extreme sensitivity to small defects with, often, extreme consequences. Sometimes, it is better to sacrifice performance and gain resilience, but this is not always possible. In Formula 1 there is no place for compromise. Winning is the only thing that counts.
But let's get back to resilience versus performance and try to reinforce the fact that the two are independent. Suppose a doctor analyzes blood and concentrates on the levels of cholesterol and, say, glucose. You can have the following combinations (this is of course a highly simplified picture):
Cholesterol: high, glucose: low
Cholesterol: low, glucose: high
Cholesterol: low, glucose: low
Cholesterol: high, glucose: high
You don't need to have high cholesterol to have high glucose concentration. And you don't need to have low glucose levels to have low levels of cholesterol.
Considering, say, the economy of a country, we can have the following conditions:
Performance: high, resilience: low
Performance: low, resilience: high
Performance: low, resilience: low
Performance: high, resilience: high
Just because the German economy performs better than that of many countries it doesn't mean it is also more resilient. This is certainly not intuitive but there are many examples in which simplistic linear thinking and intuition fail. Where were all the experts just before the sub-prime bubble exploded?
Measuring the Complexity of Fractals
We don't need to convince anyone of the importance of fractals to science. The question we wish to address is measuring the complexity of fractals. When it comes to more traditional shapes, geometries or structures such as buildings, plants, works of art or even music, it is fairly easy to rank them according to what we perceive as intricacy or complexity. But when it comes to fractals the situation is a bit different. Fractals contain elaborate structures of immense depth and dimensionality that are not easy to grasp by simple visual inspection or intuition.
We have used OntoNet to measure the complexity of the two fractals illustrated above. Which one is the most complex of the two? And by how much? The answer is the following:
Fractal on the left - complexity = 968.8
Fractal on the right - complexity = 172.9
This means that the first fractal is about 5.6 times more complex. At first sight this may not be obvious, as the image on the right appears to be more intricate, with much more local detail. However, the image on the left presents more global structure, hence it is more complex. The other image is more scattered, with smaller local details, and globally speaking it is less complex. This means that it transmits less structured information, which is precisely what complexity quantifies. Finally, below we illustrate the complexity map of the fractal on the left-hand side.
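The complexity map itself is not reproduced here. For readers who want a self-contained way to put a number on a fractal image, a standard (and much cruder) alternative to the OntoNet measure used above is the box-counting dimension - a minimal sketch for a binary image:

    # Box-counting dimension of a binary image: a generic measure of fractal
    # intricacy. It is NOT the OntoNet complexity quoted in the post.
    import numpy as np

    def box_counting_dimension(binary_img, box_sizes=(2, 4, 8, 16, 32, 64)):
        """Estimate fractal dimension from occupied-box counts at several scales."""
        counts = []
        for s in box_sizes:
            h, w = binary_img.shape
            trimmed = binary_img[: h - h % s, : w - w % s]    # tile exactly into s x s boxes
            blocks = trimmed.reshape(trimmed.shape[0] // s, s,
                                     trimmed.shape[1] // s, s)
            counts.append(blocks.any(axis=(1, 3)).sum())      # boxes containing the set
        # Slope of log(count) versus log(1/size) approximates the dimension.
        slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
        return slope

    # Example: a thin diagonal line should come out close to dimension 1.
    img = np.zeros((256, 256), dtype=bool)
    np.fill_diagonal(img, True)
    print(round(box_counting_dimension(img), 2))

Note that two images can share the same box-counting dimension yet differ greatly in global structure, which is precisely the distinction the complexity values above are meant to capture.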
Saturday, 27 July 2013
Is it progress if a cannibal uses knife and fork?
The North American XB-70 Valkyrie was a (beautiful!) tri-sonic bomber developed in the early sixties. It was, even by today's standards, an exceptionally sophisticated and advanced machine. What is striking is the short time it took to develop and build - we intentionally omit the figure here since, by today's standards, it would make many (aerospace) engineers blush. Why is it that in the days of slide-rules they could beat today's supercomputers and all other technological goodies? A few reasons are:
• One company did everything - there was no useless geographical dispersion, management was simpler.
• Complex systems have been successfully built even though complexity was not a design goal - the reason is that product development and manufacturing were much less complex.
• Engineers were better trained than today - they understood mechanics better than today's youngsters.
• The average age of engineers was much higher than today.
• Companies had clearer roadmaps and were more motivated - today it's all about shareholders value not about building great planes.
• Aerospace companies were run by people who understood the business.
• Designers didn't have to struggle with super-complex super-huge software systems.
• Companies were profitable (because they were run by people who understood the business) hence were not forced to squeeze every penny out of sub-contractors, causing them to deliver worse results ....
• Because of the above, companies could do plenty of R&D - today, R&D is where (incompetent) management make the first cost cuts.
Today, highly complex products are engineered without taking complexity into account. When this is coupled with extremely complex and dispersed multi-cultural manufacturing, assembly, procurement, design and management issues, you run into trouble if you don't keep complexity under control. It is not surprising that TODAY people doubt that man ever went to the Moon! In fact, those who make such claims cannot conceive of so complex a project being viable, because of their own poor preparation.
Friday, 26 July 2013
Do We Really Understand Nature?
According to the Millennium Project, the biggest global challenges facing humanity are those illustrated in the image above. The image conveys a holistic message which some of us already appreciate: everything is connected with everything else. The economy isn't indicated explicitly in the above image but, evidently, it's there, just like industry, commerce, finance, religion, etc. Indeed, a very complex scenario. The point is not to list everything but merely to point out that we live in a highly interconnected and dynamic world. We of course agree with the above picture.
As we have repeatedly pointed out in our previous articles, under similar circumstances:
• it is impossible to make predictions - in fact, even the current economic crisis (of planetary proportions) has not been forecast
• only very rough estimates can be attempted
• there is no such thing as precision
• it is impossible to isolate "cause-effect" statements as everything is linked
• optimization is unjustified - one should seek acceptable solutions, not pursue perfection
The well known Principle of Incompatibility states in fact that "high precision is incompatible with high complexity". However, this fundamental principle, which applies to all facets of human existence, as well as in Nature, goes unnoticed. Neglecting the Principle of Incompatibility constitutes a tacit and embarrassing admission of ignorance. One such example is that of ratings. While the concept of rating lies at the very heart of our economy and, from a point of view of principle, is a necessary concept and tool, something is terribly wrong. A rating, as we know, measures the Probability of Default (PoD). Ratings are stratified according to classes. One example of such classes is shown below:
Class 1: PoD ≤ 0.05%
Class 2: PoD 0.05% - 0.1%
Class 3: PoD 0.1% - 0.2%
Class 4: PoD 0.2% - 0.4%
Class 5: PoD 0.4% - 0.7%
Class 6: PoD 0.7% - 1.0%
A rating affects the way stocks of a given company are traded - this is precisely its function. What is shocking in the above numbers, however, is the precision. A PoD of 0.11% puts a company in class 3, while a PoD of 0.09% puts it in class 2. How can this be so? Isn't the world supposed to be a highly complex system? Clearly, if even a crisis of planetary proportions cannot be forecast, this not only points to high complexity (see the Principle of Incompatibility) but it also says a lot about all the Business Intelligence technology that is used in economics, finance, or management and decision making. So, where does all this precision in ratings come from? From a parallel virtual universe of equations and numbers in which everything is possible but which, unfortunately, does not map well onto reality. But the understanding of the real universe cannot be based on a parallel virtual universe which is incorrect.
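To make the boundary effect concrete, here is a minimal sketch of the stratification above (class bounds copied from the table; everything else is illustrative): two firms whose estimated PoDs differ by a few hundredths of a percentage point land in different rating classes.

    # Rating class from Probability of Default (PoD, in percent), using the bounds above.
    # Illustrates how spurious precision flips the class across the 0.1% boundary.
    CLASS_UPPER_BOUNDS = [(1, 0.05), (2, 0.10), (3, 0.20), (4, 0.40), (5, 0.70), (6, 1.00)]

    def rating_class(pod_percent):
        for cls, upper in CLASS_UPPER_BOUNDS:
            if pod_percent <= upper:
                return cls
        return None  # above 1.0%: outside the listed classes

    print(rating_class(0.09))  # -> class 2
    print(rating_class(0.11))  # -> class 3: 0.02 percentage points change the rating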
The above example of PoD stratification reflects very little understanding of Nature and of its mechanisms. In fact, economic crises of global proportions suddenly happen. As Aristotle wrote in his Nicomachean Ethics: an educated mind is distinguished by the fact that it is content with that degree of accuracy which the nature of things permits, and by the fact that it does not seek exactness where only approximation is possible.
Driving Complexity-To-Target: Application to Portfolio Design
Driving the complexity of a given system to a prescribed target value has numerous applications, ranging from engineering (who wouldn't want a simpler design that performs according to specs?) to management, advanced portfolio design, wealth management or investment strategy.
But more than just complexity, it is also the robustness of systems that is of most concern. When considering portfolios, both diversification and volatility are of concern.
We know that in system design (and this applies to portfolios) the mini-max principle, whereby you maximise something (e.g. the expected return) while minimising at the same time something else (e.g. risk) leads to inherently fragile solutions. Taking simultaneously many things to the limit is of course possible but the price one pays is a rigid and fragile solution: you basically push yourself into a very tight corner of the design space where you have little margin of manoeuvre in case things go wrong. And things do go wrong. Especially if you think that most things in life are linear and follow a Gaussian distribution you should prepare yourself for a handful of surprises.
Portfolio diversification and design can be accomplished differently based on complexity and, in particular, on these two simple facts:
• High complexity increases exposure - a less complex portfolio is better than a more complex one.
• A less complex portfolio accomplishes better diversification (more or less along the lines of the MPT and Markowitz logic).
Let us see an example. Suppose you want to build a portfolio based on the Dow Jones Industrial Average Index and its components. Without going into unnecessary technicalities, below is an example of our first portfolio. We observe that:
Its complexity is 64.3 (pretty close to the critical value of 68.75)
Entropy is 823
Robustness is 66.8%
Rating: 2 stars
Nothing to celebrate.
Suppose now that you wish to increase the robustness to, say, 85%. Using our Complexity-To-Target Technology it is possible to "force" the robustness of the portfolio to this target value. Since robustness and complexity are linked it is possible to do this either for robustness or complexity or even both. The new portfolio is illustrated below.
Complexity is now 50.9
Entropy is 542 - this tells us that the behaviour of the portfolio is substantially more predictable
Robustness is 84.9%
Rating: 4 stars
The hubs of the portfolio (red discs) have now changed but that is another matter.
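Ontonix's Complexity-To-Target technology is proprietary, and nothing below should be read as a description of it. Purely to illustrate the general idea of "driving" a portfolio metric towards a target, the hypothetical sketch below uses a stand-in complexity proxy (the sum of absolute pairwise return correlations) and greedily removes the asset whose removal reduces the proxy the most, until the target is met:

    # Toy "drive-to-target" loop on a stand-in complexity proxy (sum of |correlations|).
    # This is NOT the Ontonix measure or algorithm; it only illustrates the concept.
    import numpy as np

    def complexity_proxy(returns):
        """returns: (n_days, n_assets) array of daily returns."""
        c = np.corrcoef(returns, rowvar=False)
        return np.abs(c[np.triu_indices_from(c, k=1)]).sum()

    def drive_to_target(returns, tickers, target):
        tickers = list(tickers)
        while complexity_proxy(returns) > target and len(tickers) > 2:
            # Drop the asset whose removal lowers the proxy the most.
            scores = [complexity_proxy(np.delete(returns, i, axis=1))
                      for i in range(returns.shape[1])]
            drop = int(np.argmin(scores))
            returns = np.delete(returns, drop, axis=1)
            tickers.pop(drop)
        return tickers, complexity_proxy(returns)

    rng = np.random.default_rng(0)
    fake_returns = rng.normal(size=(250, 10))          # invented data, 10 assets
    names = [f"STOCK{i}" for i in range(10)]
    kept, value = drive_to_target(fake_returns, names, target=1.0)
    print(kept, round(value, 2))

A real implementation would adjust weights rather than simply dropping assets, and would use a proper complexity and robustness measure; the loop above only shows the "to-target" mechanics.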
Thursday, 25 July 2013
Software Complexity and What Brought Down AF447
After the recovery of the black boxes from the ill-fated Air France flight 447, it has been concluded that pilot error and Pitot-tube malfunction were the major causes of the tragedy. It appears, however, that this is yet another "loss of control" accident. Based on black box data, the aircraft stalled at very high altitude. But you cannot stall an A330. By definition. The airliner (and many other fly-by-wire aircraft) is packed with software to such an extent that it won't let you stall it even if you wanted to commit suicide. That's the theory. But in reality, you don't fly an airliner - you fly the software. The degree of automation is phenomenal. That is precisely the problem.
Pilots say that they have become button pushers. Here are some comments on the AF447 accident taken verbatim from a Professional Pilots blog:
"We need to get away from the automated flight regime that we are in today."
"Pilots must be able to fly. And to a better standard than the autopilot!"
"To be brutally honest, a great many of my co-pilot colleagues could NOT manage their flying day without the autopilot. They would be sorely taxed."
"It will cost a lot of money to retrain these 'button pushers' to fly again, ..."
"It appears as if the sheer complexity of the systems masked the simplicity of what was really going on. "
"Just so I understand correctly, then there is no way to take direct control of the aircraft unless the computer itself decides to let you, or perhaps more correctly stated, decides you should. Sounds like Skynet in "The Terminator". "
This accident is a very complex one. It is not going to be easy to understand why the plane really came down. It will take time to analyse the data thoroughly and to understand why highly trained pilots pulled the nose up when the stall alarm went off. The theory is that they must have received a large volume of highly confusing information in order to do so. Apparently, they managed to crash a flyable aircraft.
We have our own view as to the nature of the problem, though not its cause. We believe that it is the excessive complexity of the system that is to blame. Modern aircraft carry over 4 million lines of code. That is a huge amount of real-time code. The code, organised into modules, runs in a myriad of modes: "normal law", "alternate law", "approach", "climb", etc., etc. The point, however, is this. No matter what system you're talking of, high complexity manifests itself in a very unpleasant manner - the system is able to produce surprising behaviour. Unexpectedly. In other words, a highly complex system can suddenly switch its mode of behaviour, often due to minute changes in its operating conditions. When you manage millions of lines of code and, in addition, you feed into the system faulty measurements of speed, altitude, temperature, etc., what can you expect? But is it possible to analyse the astronomical number of conditions and combinations of parameters that a modern autopilot is ever going to have to process? Of course not. The more sophisticated a SW module is - number of inputs, outputs, IF statements, GOTOs, reads, writes, COMMON blocks, lines of code, etc., etc. - the more surprises it can potentially deliver. But how can you know if a piece of SW is complex or not? Size is not sufficient. You need to measure its complexity before you can say that it is highly complex. We have a tool to do precisely that - OntoSpace. It works like this. Take a SW module like the one depicted below.
It will have a certain number of entry points (inputs) and produce certain results (outputs). The module is designed based on the assumption that each input will be within certain (min and max) bounds. The module is then tested in a number of scenarios. Of great interest are "extreme" conditions, i.e. situations in which the module (and the underlying algorithms) and, ultimately, the corresponding HW system in question, is "under pressure". The uneducated public - just like many engineers - believe that the worst conditions are reached when the inputs take on extreme (min or max) values. This is not the case. Throw hundreds of thousands or millions of combinations of inputs at your SW module - you can generate them very efficiently using Monte Carlo simulation techniques - and you will see extreme conditions which do not involve end values of the inputs emerge by the dozens. And once you have the results of a Monte Carlo sweep, just feed them into OntoSpace. An example with 6 inputs and 6 outputs is shown below.
The module, composed of four blocks (routines), has been plugged into a Monte Carlo loop (Updated Latin Hypercube Sampling has been used to generate the random values of the inputs). As can be observed, the module obtains a 5-star complexity rating. Its complexity is 24.46. The upper complexity bound - the so-called critical complexity - is equal to 34.87. In the proximity of this threshold the module will deliver unreliable results. Both these values of complexity should be specified on the back of every SW DDD or ADD (Detailed Design Document and Architectural Design Document). So, this particular module is not highly complex. The idea, of course, is simply to illustrate the process and to show a Complexity Map of a SW module. In other words, we know how to measure the complexity of a piece of SW and to measure its inclinations to misbehave (robustness).
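As a rough, generic illustration of such a sweep (the module, its input bounds and the "surprise" criterion below are all invented; a real study would use Latin Hypercube Sampling and a tool such as OntoSpace for the complexity rating), one can bombard a function with random in-bounds input combinations and log the cases that produce out-of-range outputs even though no single input sits at its extreme:

    # Monte Carlo sweep of a hypothetical SW module: random in-bounds inputs,
    # looking for surprising outputs that do NOT coincide with extreme input values.
    import numpy as np

    def module(x):
        """Stand-in for a module with 3 inputs and 1 output (purely illustrative)."""
        a, b, c = x
        return a * b / (1.0 + c) + np.sin(a * c) * 50.0

    BOUNDS = np.array([[0.0, 10.0], [0.0, 10.0], [-0.9, 5.0]])  # assumed min/max per input
    EXPECTED_OUTPUT = (-40.0, 140.0)                            # assumed "normal" range

    rng = np.random.default_rng(1)
    samples = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(20_000, 3))

    surprises = 0
    for s in samples:
        y = module(s)
        out_of_range = not (EXPECTED_OUTPUT[0] <= y <= EXPECTED_OUTPUT[1])
        near_edge = np.any((s - BOUNDS[:, 0] < 0.05) | (BOUNDS[:, 1] - s < 0.05))
        if out_of_range and not near_edge:
            surprises += 1
    print(f"{surprises} surprising cases with no input at its extreme")

The point of the exercise is exactly the one made above: the nastiest behaviour typically emerges from combinations of unremarkable, in-range inputs, not just from the corners of the input space.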
But how complex is a system of 4 million lines of code? Has anyone ever measured that? Or its capacity to behave in an unexpected manner? We believe that the fate of AF447 was buried in the super-sophisticated SW which runs modern fly-by-wire airliners and which has the hidden and intrinsic ability to confuse highly trained pilots. You simply cannot and should not design highly sophisticated systems without keeping an eye on their complexity. Imagine purchasing an expensive house without knowing what it really costs, or embarking on a long journey without knowing how far you will need to go. If you design a super-sophisticated system and you don't know how sophisticated it really is, it will one day turn its back on you. It sounds a bit like buying complex derivatives and seeing them explode (or implode!) together with your favourite bank. Sounds familiar, doesn't it?
Wednesday, 24 July 2013
Model-free methods - a new frontier of science
When we make decisions, or when we think, our brain does not use any equations or math models. Our behaviour is the fruit of certain hard-wired instincts and of experience that is acquired during our lives and stored as patterns (or attractors). We sort of "feel the answer" to problems, no matter how complex they may seem, but without actually computing the answer. How can that be? How can a person (not to mention an animal) who has no clue about mathematics still be capable of performing fantastically complex functions? Why doesn't a brain, with its immense memory and computational power, store some basic equations and formulae and use them when we need to make a decision? Theoretically this could be perfectly feasible. One could learn equations and techniques and store them in memory for better and more sophisticated decision-making. We all know that in reality things don't work like that. So how do they work? What mechanisms does a brain use if not math models? In reality the brain uses model-free methods. In Nature there is nobody to architect a model for you. There is no mathematics in Nature. Mathematics and math models are an artificial invention of man. Nature doesn't need to resort to equations or other analytical artifacts. These have been invented by man, but this doesn't mean that they really do exist. As Heisenberg put it, what we see is not Nature but Nature exposed to our way of questioning her. If we discover that "F = M * a", that doesn't mean that Nature actually computes this relationship each time a mass is accelerated. The relationship simply holds (until somebody disproves it).
Humans (and probably also animals) work based on inter-related fuzzy rules which can be organised into maps, such as the one below. The so-called Fuzzy Cognitive Maps are made of nodes (bubbles) and links (arrows joining the bubbles). These links are built and consolidated by the brain as new information linking pairs of bubbles is presented to us and becomes verifiable. Let's take highway traffic (see map below). For example, a baby doesn't know that "Bad weather increases traffic congestion". However, it is a conclusion you arrive at once you've been there yourself a few times. The rule gets crystallised and remains in our brain for a long time (unless, sometimes, alcohol dissolves it!). As time passes, new rules may be added to the picture until, after years of experience, the whole thing becomes a consolidated body of knowledge. In time, it can suffer adjustments and transformations (e.g. if new traffic rules are introduced) but the bottom line is the same. There is no math model here. Just functions (bubbles) connected to each other in a fuzzy manner, the weights being the fruit of the individual's own experience.
As a person gains experience, the rules (links) become stronger but, as new information is added, they can also become more fuzzy. This is the main difference between a teenager and an adult. For young people - who have very few data points on which to build the links - the rules are crisp (through two data points a straight line passes, while it is difficult for 1000 points to form a straight line - they will more probably form something that looks like a cigar). This is why many adults don't see the world as black or white and why they tend to ponder their answers to questions. Again, the point is that there is no math model here. Just example-based learning which produces sets of inter-related Fuzzy Cognitive Maps that are stored in our memory. Clearly, one may envisage attaching a measure of complexity to each such map.
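A minimal sketch of such a map, using the highway-traffic example (the concepts, link weights and update rule below are guesses for illustration, not taken from the post):

    # Tiny Fuzzy Cognitive Map: concepts (nodes), signed weighted links, iterative update.
    # Concepts and weights are illustrative only.
    import numpy as np

    concepts = ["bad_weather", "accidents", "traffic_congestion", "driver_frustration"]
    # W[i, j] = influence of concept i on concept j, in [-1, 1]
    W = np.array([
        [0.0, 0.6, 0.5, 0.0],   # bad weather -> more accidents, more congestion
        [0.0, 0.0, 0.8, 0.3],   # accidents   -> more congestion, more frustration
        [0.0, 0.2, 0.0, 0.7],   # congestion  -> more accidents, more frustration
        [0.0, 0.0, 0.0, 0.0],   # frustration -> no outgoing links in this toy map
    ])

    def run_fcm(activations, steps=20):
        a = np.array(activations, dtype=float)
        for _ in range(steps):
            a = 1.0 / (1.0 + np.exp(-(a @ W)))   # squash incoming influence with a sigmoid
        return dict(zip(concepts, a.round(2)))

    # "Switch on" bad weather and see where the map settles.
    print(run_fcm([1.0, 0.0, 0.0, 0.0]))

Experience would correspond to adjusting the weights in W as new observations accumulate; no equation of traffic flow is ever written down, which is the model-free point being made here.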
OntoSpace, our flagship product, functions in a similar manner. It doesn't employ math models in order to establish relationships between the parameters of a system or a process. Essentially, it emulates the functioning of the human brain.
Is it Possible to Make Predictions?
Much of the contemporary "predictive machinery" is based on statistics - looking back in time, building some model of what has actually happened, extrapolating into the future. The concept of probability plays a central role here. Bertrand Russell is known to have said, back in 1929, that "probability is the most important concept in modern science, especially as nobody has the slightest notion what it means". In fact, probability is not a physical entity and it is not subject to any laws in the strict scientific meaning. As a matter of fact, there are no laws of probability. If a future event is going to take place, it will do so irrespective of the probability that we may have attached to it. If an extremely unlikely event does happen, its probability of occurrence is already 100%.
In actual fact, we still don't even really understand the crisis and its multiple causes. But how can one speak of predicting phenomena which are poorly understood? Shouldn't we change the order of things? Shouldn't we try to first understand better the dynamics of highly complex interconnected and turbulent economic systems and devote less resources to fortune telling and high-tech circle-squaring? How about:
• Developing a new kind of maths, which is less "digital" and closer to reality.
FREE account for Measuring Business Complexity and Rating
If you wish to measure the complexity of your business, or assess its Resilience Rating, just follow these instructions:
1. go to
2. login as User: freerating Pwd: freerating
Don't forget to read the short tutorial!
Monday, 22 July 2013
Beyond the concepts of Risk and Risk Management
The current economic crisis indicates that conventional risk assessment, rating and management techniques don’t perform well in a turbulent and global environment. AAA-rated companies and banks have suddenly failed, demonstrating the limitations of not only risk management techniques but also the need to re-think the expensive and sophisticated Business Intelligence and Corporate Performance Management infrastructure that modern corporations have relied on. But what are the origins of the financial meltdown that is spilling over into the real economy? Why is the economy increasingly fragile? We identify three main causes: excessively complex financial products, globalized financial markets that lack regulations and usage of subjective computational models that are naturally limited to less turbulent scenarios.
Models are only Models. No matter how sophisticated, a model is always based on a series of assumptions. More sophistication means more assumptions. Classical risk evaluation models, because of their subjective nature, are inherently unable to capture the unexpected and pathological events that have punctuated human history, not to mention the economy. But there is more. Conventional Business Intelligence is unable to cope with the hidden complexity of a modern global corporation precisely because it thrives on unrealistic mathematical models. Once defined, a model is condemned to deliver only what has been hard-wired into its formulation. However, a difficulty in analysing our inherently turbulent economy and, more specifically, financial instabilities, lies in the fact that most of the crises manifest themselves in a seemingly unique manner. Life very rarely follows a Gaussian distribution and the future is constantly under construction.
Excessively complex financial products have spread hidden risks to every corner of the globe. Their degree of intricacy is such that they are often beyond the control of those who have created them. Derivatives of derivatives of derivatives …. The speculative use of such products creates an explosive mixture. Because of the global nature of our economy, and due to its spectacular degree of interconnectedness, such products are an ideal vehicle for creating and transmitting uncertainty.
Uncertain and global economy. It is because of the laws of physics that our economy is increasingly uncertain, unstable and interconnected. This means that it is becoming increasingly complex and turbulent. Conventional methods that rely on mathematical models are unable to capture and embrace this complexity, not to mention predict crises. The increase of complexity is inevitable and globalization is an inevitable consequence of the growth of complexity.
Complexity is a fundamental property of every dynamical system. Like many things, it can be managed provided it can be measured. As for most things in life, when managed, complexity becomes an asset. When ignored, it becomes a liability, a time bomb. Because of the laws of physics, the spontaneous increase of complexity in all spheres of social life is inevitable. Like for most things in life, every system possesses its own maximum level of sustainable complexity. Close to this limit, known as critical complexity, it becomes fragile, hence vulnerable. This is the fundamental reason why each corporation should know its value of complexity, as well as the corresponding critical value.
Complexity can be measured. Ontonix is the first company to have developed and marketed a radically innovative and unique technology for rational quantification and management of complexity. Introduced in 2005, OntoSpace™, our flagship product, is the World’s first complexity management system. While others struggle with definitions of complexity, we have been measuring the complexity of banks, corporations, financial products, mergers, or crises already since 2005. Our complexity measure is objective. It is natural. No fancy mathematics, statistics or exotic models. A 100% model-free approach guarantees an objective look at a corporation.
Hidden and growing complexity is the main enemy of a corporation. A corporation may still be profitable but close to default. Highly complex systems are difficult to manage and may suddenly collapse. Excessive complexity is the true source of risk.
Critically complex systems become almost impossible to manage, hence are vulnerable and greatly exposed to both internal and external sources of uncertainty.
Complexity X Uncertainty = Fragility™. This simple yet fundamental equation has been coined by Ontonix and establishes the philosophy and logic behind our technology and services offering. The bottom line is simple: a complex business process, operating in an uncertain environment, is a fragile mix. Since the uncertainty of the global economy cannot be easily altered, in order to operate at acceptable levels of fragility one must necessarily reduce the complexity of the corresponding business model. Based on this logic Complexity Management goes beyond Risk Management and establishes a new underlying paradigm for a superior and holistic form of Business Intelligence. A technology of the Third Millennium.
Conventional techniques are insufficient to insure against all future contingencies.
There are numerous recent examples of AAA-rated corporations that have suddenly defaulted or found themselves in serious difficulty. The collapse of the Lehman Brothers bank is a prominent case. Based on the financial highlights of the bank in the period 2004-2008, our analysis indicated how quickly increasing complexity provided crisis precursors, hinting more than a year before default that the system was in difficulty. Evidently, the management was unaware that complexity was sky-rocketing, as it is invisible to conventional methods.
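The Lehman analysis itself relied on Ontonix's own complexity measurements. The snippet below is only a toy illustration of the general idea of watching a complexity trend for precursors, with invented yearly values and an assumed alarm threshold:

    # Toy precursor check: flag a business whose measured complexity grows too fast.
    # Yearly values are invented for illustration, not Lehman's actual figures.
    complexity_by_year = {2004: 38.0, 2005: 41.0, 2006: 47.0, 2007: 61.0, 2008: 82.0}

    GROWTH_ALARM = 0.20   # assumed threshold: >20% year-over-year growth raises a flag

    years = sorted(complexity_by_year)
    for prev, curr in zip(years, years[1:]):
        growth = (complexity_by_year[curr] - complexity_by_year[prev]) / complexity_by_year[prev]
        flag = "  <-- precursor?" if growth > GROWTH_ALARM else ""
        print(f"{curr}: complexity {complexity_by_year[curr]:.0f} ({growth:+.0%}){flag}")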
The bottom line: manage complexity.
Monday, May 6, 2013
Most, if not all, of the books that we have read this semester focus almost exclusively on the human stories generated by moving through or adventuring into the wilderness. Even Norgay's Touching My Father's Soul, which examined the impact, both positive and negative, of European climbing in the Himalayas on the Sherpa community, was primarily anthropocentric. Although Norgay approached Everest in a more respectful way because, as a man rediscovering his Buddhist roots, he believed that the mountain was a goddess, he rarely, if ever, mentioned the environmental impact that climbing has had on Everest. There was no mention of trash or the need to remove it. Rowing to Latitude, however, with its eloquent and lengthy passages describing the wilderness and the signs of human habitation that Fredston and Doug pass through, stands in sharp contrast to the other books we have read. Unlike the other authors we have read, Fredston makes it a central purpose of her book to describe the environments that she and Doug have rowed through and the insights that those journeys have given them into the impact that anthropocentrism and human expansion have had not only on the indigenous peoples they meet but on the environment at large. In doing so, she recognizes and attempts to show her reader that virgin wilderness no longer exists. Even the preserves, where human contact technically should not have occurred, are littered with the refuse of human activity.
While Fredston and Doug saw hints of human activity even in the most remote parts of Canada and Alaska, they were largely able to avoid it. The abundance of wildlife helped to create the illusion that they were traveling through a virgin wilderness, even as they encountered stone tools, dilapidated cabins, and rusted barrels. Norway, however, shatters this illusion completely. Whereas Fredston and Doug were often able to pretend that they were the first to see the landscape they traveled through, in Norway, as Doug writes, "we always have the feeling that everything has been discovered once, twice, hundreds, maybe thousands of times before. We need a few more secrets" (216). The frequent presence of sheep and other domesticated animals, apart from driving Fredston bonkers when they camp, further emphasizes that Norway has no unadulterated nature left to offer them. As such, Doug ironically writes, "Mostly we see sheep (even in nature preserves!) and cows and salmon farms, though on a good day, we might come across a land otter or mink, a seal, and a couple of deer" (216). This domesticated wilderness could not be more different from their trips along the Canadian and Alaskan coasts, where they were often discovered and examined by bears in the middle of the night. Although their journey through Norway disheartens them in many ways, it teaches them a valuable lesson: "It made us realize that, like the perpetually grazing sheep, centuries of human habitation have nibbled away not only at the earth but at our perception of what constitutes nature. When we do not miss what is absent because we have never known it to be there, we will have lost our baseline for recognizing what is truly wild. In its domestication nature will have become just another human fabrication" (217). Norway illustrates the dangers posed by continual human expansion; however, Fredston goes beyond a simple environmental message. She powerfully suggests that human expansion will pose a danger not only to the wilderness and wildlife, but to humans as well. The development of Alaska, she shows, not only diminishes the habitat and quality of life of wild animals, but it also does so for humans by shrinking the amount of wilderness available for human exploration. This, in turn, irrevocably alters how people interact with one another in the wilderness: "In Alaska, meeting another group of paddlers used to be an occasion to socialize. But now it is not uncommon for two groups of paddlers camped on opposite ends of a beach to adopt the same avoidance behavior. As development shrinks the open spaces and technology makes the remaining spaces more accessible, this may become a standard coping mechanism. We will have replaced the privilege of solitude with isolation" (220). Thus, Rowing to Latitude becomes a chronicle of the changes wrought by humans on the natural world and the dangers that this poses to nature and to humans as well. In destroying or minimizing wilderness, people are destroying or minimizing their abilities to escape from everyday life and to explore not only the wilderness but themselves and the people they are with.
1. I agree that Fredston is unique among the authors we've read in her vivid depictions of the world around her, and her desire to give a voice to the voiceless. One passage that stood out to me in particular was where she expresses her desire to "give voice to the caribou that graze without fear along the Labrador shore, to the wide-shouldered brown bears of the Alaska Peninsula who depend upon the annual migration of salmon, to fjords uncut by roads and power lines" (xvi). While it is slightly problematic that Fredston is anthropocentrically giving human voices to non-human beings, I think her desire to write for other beings and not just herself is pretty respectable.
2. I really enjoyed Fredston's societal critiques and environmental standpoint. Perhaps I'm swayed because I am an Environmental Studies major, but I can't imagine spending such focused time out in nature as the people we've read about this semester have done and not considering the anthropogenic threats to our earth that threaten the viability of pristine locations of "wilderness." Honestly, I'm surprised more of the narratives we've read haven't focused more on environmental issues and land preservation. (I realize I'm heavily biased, but still!) Not to say other authors didn't lament the commercialization of Everest and degradation of other landscapes, but Fredston's critique is more apparent.
To me, Fredston is arguably the most admirable adventurer we've read this semester--not only because of her concerns for social and environmental justice, but also for her priorities. I'm glad we're ending the term with a pair who adventure because they love exploring on the water while discovering themselves and strengthening their relationship. The simplicity in these motivations is beautiful and puts our semester into perspective; adventuring is incredibly personal and can bring about huge accomplishments while providing moments for introspection. For example, during one of Fredston's asides into environmental philosophy she says, "but these were thoughts that would surface once I was back on the river" (94). Rowing allows her to think and process her thoughts.
|
CSS Opacity - Web Development 101
written by: Mustavio•edited by: Simon Hill•updated: 3/29/2010
The CSS opacity property allows you to modify the transparency of any element on your web pages, from images to colored boxes. This makes CSS opacity an extremely useful way to guide your user's attention to the important parts of your web pages, improving the user experience.
How to use CSS Opacity
CSS opacity can be a difficult property to implement, because there are three completely different syntaxes for the property across various browsers. For CSS opacity to function properly in any browser that your visitors might be using, you must include all three of these declarations in your style sheets and list them in the correct order. To start with, the CSS property for Internet Explorer versions 5 through 7 is defined as:
filter: alpha(opacity=x);
The x in the above example is where you define the level of opacity or transparency for whichever element you are assigning this property. This value can be any number from 0 to 100. A value of 0 means that the element will be completely transparent when rendered on your page. A value of 100 means that the element will have no transparency, which is the default behavior. For Internet Explorer 8, the syntax for the CSS opacity property is:
-ms-filter: "progid:DXImageTransform.Microsoft.Alpha(Opacity=x)";
Again, x can be any value from 0 to 100, where 0 is completely transparent and 100 is completely opaque. Note that for CSS opacity to work in all versions of Internet Explorer, this definition should come first in your style sheets. The syntax for CSS opacity in Firefox, Chrome, Safari and most other browsers is by far the easiest of the three, simply:
opacity: x;
Where x is a number between 0.0 and 1.0. Similar to the above examples, a value of 0 here will make the element completely transparent, and a value of 1 will make the element completely opaque. Luckily, the scales map directly onto one another, so if you want consistent behavior, take your opacity value for Internet Explorer and divide it by 100 to get your Firefox value, or multiply your Firefox value by 100 to get your Internet Explorer value. That is, a value of .4 for Firefox will render identically to a value of 40 for Internet Explorer. If the Internet Explorer 8 property is placed after the property for Internet Explorer 5-7, opacity will not work in Internet Explorer 8 running as Internet Explorer 7 (compatibility mode).
CSS Opacity Examples and Uses
Let's jump right in with an example. Presume that we want to make a <div> on our page 50% transparent. First we create our CSS opacity class in our stylesheet, recalling that the definition for Internet Explorer 8 should come first. This would look like:
.transparent { -ms-filter:"progid:DXImageTransform.Microsoft.Alpha(Opacity=50)"; filter: alpha(opacity=50); opacity: .5; }
Now let's create two <div> elements and apply the CSS opacity class to one of them:
<div style="background-color:red;width: 200px; height: 100px;"></div>
<div class="transparent" style="background-color:red;width: 200px; height: 100px;"></div>
Rendered on your web page, the second box appears 50 percent transparent while the first stays fully opaque. While this example is pretty boring, it should get your mind working on how this property can be used to improve your web pages. One very common example is to combine CSS opacity with a CSS mouseover (:hover) rule, so that the transparency of an element changes when the user places their mouse over that element, as sketched below. This could be used on menu items, to give the user of your web pages a feeling of emphasis and interactivity. CSS opacity can also be used to fade out unimportant parts of your web pages, for example to 'dim the lights' when you are streaming a video.
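As a rough sketch of that mouseover idea (not part of the original example), the rules below reuse the same three cross-browser declarations; the class name menu-item and the 70/100 opacity values are arbitrary choices for illustration:
.menu-item { -ms-filter:"progid:DXImageTransform.Microsoft.Alpha(Opacity=70)"; filter: alpha(opacity=70); opacity: .7; }
.menu-item:hover { -ms-filter:"progid:DXImageTransform.Microsoft.Alpha(Opacity=100)"; filter: alpha(opacity=100); opacity: 1; }
A menu entry would then simply be marked up as <div class="menu-item">Home</div>. Note that older versions of Internet Explorer only apply filters to elements that 'have layout', so adding zoom: 1; to the rule may be needed for those browsers.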
|
When the Government Owns Your DNA, They Own You!
If newborn DNA is sequenced into a numerical format, the government will soon have and own each child's complete genetic code, allowing them to make all manner of inferences about future medical conditions of not only that child, but future offspring of that child. According to Francis Collins, the current head of the National Institutes of Health, "whether you like it or not, a complete sequencing of newborns is not far away."
While it is not unusual for newborn DNA, obtained either via neonatal methods or soon after birth, to be tested for potential genetic problems, several states are now storing the DNA and even turning it over for research purposes. A February report in the Texas Tribune revealed that the Texas Department of State Health Services was giving hundreds of infant blood spots to the Armed Forces DNA identification lab, which is in the process of building a national mitochondrial DNA registry.
Nudged by The Newborn Screening Saves Lives Act of 2007, the government is offering federal funds to states that collect, store and share your baby's DNA, usually obtained without parental knowledge or consent.
Government ownership of your child’s DNA enables State Health Departments and future legislatures to use that DNA as they see fit. And, according to Twila Brase, president of the Citizens’ Council on Health Care, the only reason to store newborn DNA long-term is to conduct genetic research. “Every American child will grow into an adult whose DNA is owned and accessible to the government for research, law enforcement, predictive analysis, and social and genetic engineering.”
With the passage of ObamaCare and its death panel, along with the far-left’s push for a government single payer health care system, the federal government could implement policies that use the genetic screening to force abortions to reduce the birth of children with costly medical conditions, or perhaps, children of the wrong color, or social status.
While some states are considering ways to prevent the abuse of this genetic information, the best action, according to Brase, “would be to take newborn screening out of state health departments. Newborn screening should be a hospital procedure, not a government procedure. If states never get the blood, they would never be able to store it, use it, or share it.”
Government storage of your child’s DNA and genetic tests without parental consent violates genetic privacy rights, parental rights, patient’s rights, property rights and Fourth Amendment rights. It is up to parents to pressure their local state legislature to pass laws requiring informed consent before it’s too late.
But it is not just children facing the threat of ending up in the DNA registry. Obama wants one and we know he usually gets what he wants, even if he is "forced" to use an executive order. While DNA is regularly collected from those that commit serious crimes, some states are passing laws requiring DNA collection from those charged with minor low level crimes. It may well just be a matter of time before the government requires DNA to get a driver's license or marriage license in order to build a national DNA registry of everyone in the country.
In a recent Supreme Court decision that essentially legalized the national DNA database, Justice Scalia wrote “it may be wise, as the Court obviously believes, to make the Leviathan all-seeing, so that he may protect us all the better. The proud men who wrote the charter of our liberties would not have been so eager to open their mouths for royal inspection.”
|
Methamphetamines FAQ:
Meth is made with Toxic Chemicals
What is meth?
Methamphetamine, or "meth", is a very potent CNS (Central Nervous System) stimulant that stimulates the brain and spinal cord by changing the normal ways that the CNS metabolizes neurotransmitters. Neurotransmitters are so named because they are specific chemicals that relay messages between nerve cells.
The predominant neurotransmitter that is affected by meth is a chemical called dopamine. Dopamine levels in the brain are part of our natural reward system. When you do something that supports your survival and, therefore, feel good, you are experiencing the effects of dopamine within your CNS.
Methamphetamine is a synthesized chemical that was heavily used in the Second World War because the army didn't understand the negative effects that come from the use of this drug, but only recognized that the troops could work harder and longer under its influence. Methamphetamines are highly addictive and are becoming one of the most abused drugs in America. There are legal pharmaceutical preparations of methamphetamine that are prescribed for specific ailments, but most of the meth found on the streets is made in makeshift laboratories that combine ephedrine with other toxic chemicals, producing a drug that has more side effects than the commercially produced product.
What are the slang terms for meth?
There are many street names for meth, which vary by region of the country; these are some of the most common: speed, ice, crank, chalk, and zip. Some of the names are descriptive of the type of meth: for instance, pure methamphetamine hydrochloride, a form of the drug that is easily smoked, is called L.A., and when it has the clear look of frozen water it is called "ice" or "crystal", hence the common term "crystal meth". Much of the crystal meth is smuggled into the U.S. from Taiwan and South Korea.
How is meth manufactured?
Meth Making Lab
Unfortunately, meth is easily produced in makeshift laboratories that are easily set up from instructions on the Internet. There are literally hundreds of Internet articles that teach someone different means by which to make meth from the essential raw product of ephedrine or pseudoephedrine. Ephedrine is an over-the-counter cold medication that can be purchased at any pharmacy. A person that wants to make methamphetamines can invest a few hundred dollars in equipment and chemicals to synthesize thousands of dollars of meth. It is also reported that every person that makes meth teaches an average of ten others how to do the same.
Where are these meth labs located?
The simple answer, for the United States, is almost anywhere. Arrests have been made in urban and rural areas alike. They have been found in vacant buildings, motel rooms, barns, storage buildings and in people's cars. It doesn't take a sophisticated laboratory to make meth with these makeshift labs, so they are usually putting them in a place where they believe they won't be detected by law enforcement and others that could stop their clandestine activities.
Buying Methamphetamine
Is making meth in a homemade lab dangerous?
The process of changing ephedrine is a volatile chemical process that can cause harm in many ways, but the most common danger comes from the solvents that are used in the process. There are many different recipes for making meth, but they all involve chemicals that have poisonous and/or flammable fumes. Some of the most common solvents are acetone, ammonia, benzene, ether, hydrochloric acid and muriatic acid. All of these chemicals have dire consequences for anyone that may come in contact with them or their fumes.
What is the cost of methamphetamines on the street?
The cost of meth varies according to the demands of the market and purity of the meth being sold. The approximate price of one gram of meth being sold on the streets is about one hundred dollars. One gram of meth equals about fifteen to twenty "hits" of this drug.
Who are the typical users of methamphetamines?
It is difficult to pinpoint a typical user since meth use transcends specific demographics, but most of the meth is sold to people in their 20s and 30s from many walks of life. 35% of the users are between the ages of 23-30 and only 6% are over 40 years of age.
Is meth a drug that is used by teenagers?
As with most drugs, over time, more and younger teenagers are experimenting with and using meth. One national survey showed that the number of senior high school meth users doubled between 1990 and 1996. The teenage percentage has continued to rise.
Do meth users combine the use of meth with other drugs?
It is common for someone using meth to feel overly anxious when they have taken enough of the drug to give them the high that they are seeking. To counteract this jittery feeling, meth users will many times abuse alcohol or other downers, or tranquilizers such as Valium and Xanax.
Physical Effects of Using Meth
What are the physical effects of using meth?
Meth use has many side effects that are dangerous, but the most common problems are those related to the cardiovascular system. Meth use commonly causes chest pains that originate from the rapid changes in blood pressure, which can lead to heart attacks and death in some cases. Meth also increases the heart rate and can cause irreversible damage to one's vascular system. There are also the problems associated with the lowering of one's calcium levels, which causes "meth mouth" and other tooth decay as well as jaw problems from the clenching of one's teeth while the body is speeding under the influence of this drug.
Psychological dependence on this drug causes meth users to ignore their better judgments and continue using the drug even though they find obvious physical problems arising from its use. Because of the appetite depressant effect, meth users are notorious for going many days without eating, often not sleeping either, and consequently losing body mass, including from essential organs.
It is important to note that street meth may have many unknown chemical toxins in it as well as meth. It is impossible to predict what chemicals are being used in the makeshift labs that are producing meth, and all of these solvents are toxic to the body. Some of the poisons will cause immediate problems and others will lodge in the body until the concentration reaches a point where it will cause severe problems. Meth use will tax the liver in its attempt to rid the body of these toxins, which causes fatty-liver syndrome and can lead to hepatic failure.
There are other long-term effects that are worthy of attention. Long-term use of meth will cause kidney failure as well as the above-mentioned liver problems. Again, this is due to the toxic nature of this drug. There are also problems associated with the lungs, brain damage, blood clots, as well as other psychological problems such as depression, hallucinations and negative behavior changes, and extreme weight loss equivalent to starvation.
Chronic meth use is associated with malnutrition, problems with one's immune system, as well as rapid aging that causes multiple problems related to one's skin. It is safe to say that meth is one of the most hazardous drugs in terms of one's health and physical well-being.
Methamphetamine Treatment
Where can I get meth treatment?
The Narconon drug rehabilitation program has helped many people get off methamphetamine. Some other drug rehab programs use substitute drugs to help a person get off meth and other drugs, but the Narconon drug rehab program doesn't use drugs to get a person off drugs, but has instead a strong, healthful nutritional component. The Narconon drug rehab program is unique and has a strong success rate. The drug detox procedure that the Narconon program provides helps to release all of the chemicals from the body that the user has collected over the years. These chemicals are stored in the fatty tissue of the body and, unless gotten rid of, can trigger drug cravings and hold a person back in life.
For more information about the Narconon drug rehabilitation program, contact a Narconon representative.
|
The Art of Economics
Invisible Man by Ralph Ellison
Chapter 5 Study Guide
Plot Summary:
The narrator goes to chapel, where all of the students are supposed to go and where Dr. Bledsoe is. Dr. Bledsoe and one other man are the only black people standing in front of the congregation. The narrator takes notice that Dr. Bledsoe has no trouble touching a white man, and he remembers how difficult it was for him to lay his hands on Mr. Norton. The other black man is Reverend Homer A. Barbee, and he gives a sermon about the biography of the school's founder. The school's founder died, but Barbee assures them that his presence is in the school. The narrator starts becoming even more depressed about possibly being expelled from school because he thinks he can see Barbee's vision for the school. The narrator hears a song that reminds him of his parents called "Swing Low, Sweet Chariot." The narrator leaves chapel before it is over.
The narrator goes to the administration building to have his meeting with Dr. Bledsoe. The narrator is worried that Barbee's sermon may have influenced Dr. Bledsoe to be tougher on him.
Reference Points:
The way the Invisible Man describes the chapel’s eaves is like a highly intelligent and curious person. A curious person because, obviously, the Invisible Man reflects upon various everyday things and looks at them from a new perspective. An intelligent person because the things he compares the “sweeping eaves” to is not something most people think of on a daily basis. In this quote, the Invisible Man also tells us that he sees a difference between nature and man.
“And there on the platform I too had stridden and debated, a student leader directing my voice, at the highest beams and farthest rafters, ringing them, the accents staccato upon...
|
The Plausibility of the Setting of The Handmaid’s Tale
The Handmaid's Tale is another interesting post-apocalyptic/dystopian novel. In this book by Margaret Atwood, the setting is a combination of geographical and societal elements. In this post-America society, which the main character Offred refers to as Gilead, Atwood hints that the society is set up in the town of Cambridge, Massachusetts.
Now, this geographical region could easily be discarded as just another place where this society happened to be set up. However, this New England area was where the Puritans first began their new life in the new world, where they were free from religious persecution. The Puritans were very religiously devout Christians who followed the Bible to the letter (of their choosing, that is) and held grave superstitions against anyone who did not believe in their ways.
Similarly, Gilead is set up as a religious society where the only religion is Christianity, and it serves to solve the problem of infertility. The group that leads the society is called The Eyes, and it is led by older Christian men, most of whom hold the title of Commander. These men were the ones that tore down American society as it was in place during the 1980s. During the 1980s, the topic of abortion was being debated across the country, as people began to question whether there was a true separation between church and state in the American government, and whether a woman was truly responsible for her own body. Atwood takes the religious extremist ideas from this time period and puts them in place as the laws of Gilead, where each woman has her own role that is assigned to her. They are either a wife, a Martha (a servant), or a Handmaid. The Handmaids are the true representation of these extreme religious ideas, as they are the women that birth the children. They are covered from head to toe and are required to birth babies to keep the society going. Abortion is illegal in this society, and so is the concept of men being barren. Outside this area, however, Offred discusses how there is a war raging on between rival factions of Christianity. This is another interesting idea, as religious wars have been going on since the beginning of religion.
So, when studying this society, many people question whether this is actually a possible outcome for a post-American society. Looking back at when this novel was published in the 1980s, it is slightly possible that extremist ideas about religion and women could take root within a new society. It's also even possible for a religious extremist group of people to take over a nation and completely convert it into a radically theocratic society. There have been examples of this throughout history, more recently the takeover by the Taliban in Middle Eastern countries such as Afghanistan. However, I think Atwood forgot to take into account America's standing within the world. She solely focuses on the internal aspects of American society, and not the international view of it. Today, and in the 1980s, there are a lot of countries and organizations (legal and illegal) that depend upon American society remaining the way it is and was. If America were to have a religious apocalypse, more likely than not, many organizations and other countries would get involved to stabilize America, or even try to take it over.
It's not exactly probable that a religiously extremist group, be it part of the government or an independent organization, would simply murder every single member of the government and assume power. First off, that's a lot of people to kill, and in a society that's already declining in numbers due to infertility, the motive for killing that many people doesn't quite seem rational. Second, what country in the world during the late 1900s wouldn't try to involve itself in America's problems? After WWII, America had become a world power, an international police force. The entire world had something invested in America policing other countries, or staying out of other countries.
Looking at the setting for this novel, it's pretty clear that Atwood had interesting and partly plausible ideas about religious extremism taking root within American society and influencing governmental decisions and laws. However, Atwood fails to discuss the influence of other countries and organizations that depend upon American society remaining the way it was during the 1980s. It's just not entirely plausible that every country in the world would simply sit back and watch as America becomes a war zone for rival factions of Christianity and holds a society whose only objectives are procreation and the control of women. There are too many economic and political factors that are just not considered.
Conflict ~ The Essence of Plot (in The Road)
The essence of plot is conflict, and a story always has a central conflict; it can be apparent, or it can be hidden under layers of metaphors and themes. Cormac McCarthy makes the conflict blatant yet layered with depth in the novel The Road. Before I begin describing the conflict and how it develops the plot, let's take a step back to understand the basic premise of the novel. The storyline takes place in post-apocalyptic America, where all living things have died off. It isn't clear how everything died, but McCarthy hints that fire and ash were involved in the crumble of American society. When reading this novel, the reader can immediately tell that the main characters, "The Man" and "The Boy", are fighting for survival, but what they're fighting against is not quite stated. McCarthy presents the four major types of conflict: man vs. nature, man vs. man, man vs. himself, and man vs. (in this case, the lack of) society.
Man vs. Nature:
The setting of the novel takes place in a post-apocalyptic America, where the only living beings are humans. Now, rationally speaking, if there was absolutely nothing alive except for us humans, there’s probably a great chance that we’d eat ourselves out of house and home, quite literally. “The Road” definitely displays this idea, as the Man and The Boy scour the barren land for leftovers of preserved/processed food, which is the only thing that is left for consumption (except for other humans…). This depletion of resources has the man and the boy searching for ways to make it through to the next day, as they ration their food.
In addition to this, the boy and the man face the brutal weather and conditions presented by nature. You'd think that if everything was dead, Mother Nature would probably let up a bit; the only thing left for her to torture is humans, and honestly, they have enough problems as it is in this book. Nevertheless, the man and the boy continuously have to move south (because if climate patterns haven't changed, then why should migration for warmth?) to make it through the harsh winter, with what resources they have and can find.
Man vs. Lack of Society?
The fourth form of conflict is man vs. society, but I think in this case the fact that there is no society is a conflict in itself. The cause of the lack of society is obviously shown through the conflict of nature, but it is the fact that there is no society, no form of regrouping and no way to rebuild that causes this eternal chaos that is presented in the form of thievery and cannibalism. The lack of society promotes natural selection, and pushes this idea that without society man is left to fend for himself.
The lack of society also presents paternal issues between the man and the boy; the man continues to long for the way life was with society. The fact that the boy was born post-apocalypse causes the two to have a major cultural barrier between them. Their relationship becomes strained as the novel progresses, as the man tries to show the boy bits and pieces of his old life through objects they find as they scavenge for resources. This cultural barrier puts both boy and man at an emotional distance as they struggle to find ways to survive, without much comfort.
Man vs. Man:
Now, there are probably a lot of people out there cringing and disgusted by the idea of cannibalism; some may even believe that it's only a theory and not an actual practice done since the beginning of humans. Is it horrible? Yes. Immoral? Possibly. But is it completely possible that when there is nothing left to eat, and there is no hope for mankind, people would actually turn to this practice? The answer is yes. There are many arguments as to why people would or would not, and should or should not, become cannibals, but the fact of the matter is that in the book McCarthy shows that people can sometimes be desperate to survive, even though death is an impending doom. As the man and the boy move towards the south they encounter multiple bands of cannibals, and they are labeled as evil by the man and the boy. McCarthy presents Darwin's law of natural selection, and it is completely applicable to this novel, as both the man and the boy struggle to not be hunted down by the cannibals, while finding ways to survive on their own without eating human flesh. It's kill or be killed in this novel, and the man makes difficult decisions regarding his and the boy's safety when they face these people.
Man vs. Self
This last form of conflict is shown in multiple ways through the characterization of the man. The man's main goal is to make sure the boy survives; it's the only thing that is driving him to keep going. However, there are moments in the novel where the man has to make some tough decisions, and sometimes they go unapproved by the boy, such as stealing from other people and killing those out to kill or harm them.
But though these actions may sometimes weigh on his or the boy's conscience, McCarthy presents another form of internal conflict that has the reader questioning whether surviving is living, and whether one can live where there is no hope and no motivation. A flashback is shown of the man and his wife, just after the apocalypse. The boy had been born, and the wife is telling the man how he should have just put them all out of their misery with the bullets in their lone pistol. This pivotal moment causes the audience to realize that the man has another option other than to survive. We see the man wrestle with this idea, especially during moments when it looks like either the man or the boy is close to death. The man always wonders whether it is better for them to be dead than to have their lives at the hands of others. It's a battle for control over one's life, in a setting where there is no more control. As the story progresses, McCarthy prompts the reader to question whether the man is simply surviving, and whether there is anything left to live for in a barren world.
Tying in of Climax, and Resolution with Conflict
The conflict never really dies out in the novel. The climax, it can be argued, takes place when the man dies, leaving the boy to fend for himself, no longer having someone to look after him and guide him, but having the option to think for himself. However, this does not change the fact that the conflict of survival still remains, until the resolution where the boy meets a family who ask him to come with them, to somehow survive together. The ending is ambiguous; the song still remains the same, and the essence of survival is still embedded within the open ending.
|
Cramér–von Mises criterion
From Wikipedia, the free encyclopedia
In statistics the Cramér–von Mises criterion is a criterion used for judging the goodness of fit of a cumulative distribution function $F^*$ compared to a given empirical distribution function $F_n$, or for comparing two empirical distributions. It is also used as a part of other algorithms, such as minimum distance estimation. It is defined as
$$\omega^2 = \int_{-\infty}^{\infty} \left[ F_n(x) - F^*(x) \right]^2 \,\mathrm{d}F^*(x)$$
In one-sample applications $F^*$ is the theoretical distribution and $F_n$ is the empirically observed distribution. Alternatively the two distributions can both be empirically estimated ones; this is called the two-sample case.
The criterion is named after Harald Cramér and Richard Edler von Mises who first proposed it in 1928–1930.[1][2] The generalization to two samples is due to Anderson.[3]
The Cramér–von Mises test is an alternative to the Kolmogorov–Smirnov test.
Cramér–von Mises test (one sample)
Let $x_1, x_2, \ldots, x_n$ be the observed values, in increasing order. Then the statistic is[3]:1153[4]
$$T = n\omega^2 = \frac{1}{12n} + \sum_{i=1}^{n} \left[ \frac{2i-1}{2n} - F(x_i) \right]^2$$
If this value is larger than the tabulated value, then the hypothesis that the data come from the distribution $F$ can be rejected.
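As an illustration of the computing formula above, here is a minimal Java sketch (not from the original article); the class and method names are arbitrary, and the CDF is passed in as a standard java.util.function.DoubleUnaryOperator:
import java.util.Arrays;
import java.util.function.DoubleUnaryOperator;

public class CramerVonMises {
    // Computes T = n * omega^2 for a one-sample test against the CDF F.
    static double oneSampleStatistic(double[] sample, DoubleUnaryOperator cdf) {
        double[] x = sample.clone();
        Arrays.sort(x); // the formula assumes the values are in increasing order
        int n = x.length;
        double t = 1.0 / (12.0 * n);
        for (int i = 1; i <= n; i++) {
            double term = (2.0 * i - 1.0) / (2.0 * n) - cdf.applyAsDouble(x[i - 1]);
            t += term * term;
        }
        return t;
    }

    public static void main(String[] args) {
        // Example: test a small sample against the uniform(0,1) distribution, whose CDF is F(u) = u on [0,1].
        double[] sample = {0.12, 0.27, 0.41, 0.58, 0.76, 0.91};
        double t = oneSampleStatistic(sample, u -> Math.max(0.0, Math.min(1.0, u)));
        System.out.println("Cramer-von Mises T = " + t);
    }
}
The resulting T would then be compared against the tabulated critical values mentioned above.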
Watson test
A modified version of the Cramér–von Mises test is the Watson test[5] which uses the statistic U2, where[4]
$$U^2 = T - n \left( \bar{F} - \tfrac{1}{2} \right)^2, \qquad \bar{F} = \frac{1}{n} \sum_{i=1}^{n} F(x_i)$$
Cramér–von Mises test (two samples)
Let $x_1, \ldots, x_N$ and $y_1, \ldots, y_M$ be the observed values in the first and second sample respectively, in increasing order. Let $r_1, \ldots, r_N$ be the ranks of the x's in the combined sample, and let $s_1, \ldots, s_M$ be the ranks of the y's in the combined sample. Anderson[3]:1149 shows that
$$T = \frac{U}{N M (N+M)} - \frac{4 M N - 1}{6 (M+N)}$$
where U is defined as
$$U = N \sum_{i=1}^{N} (r_i - i)^2 + M \sum_{j=1}^{M} (s_j - j)^2$$
If the value of T is larger than the tabulated values,[3]:1154–1159 the hypothesis that the two samples come from the same distribution can be rejected. (Some books[specify] give critical values for U, which is more convenient, as it avoids the need to compute T via the expression above. The conclusion will be the same).
The above assumes there are no duplicates in the $x$, $y$, and combined sequences. So each $x_i$ is unique, and its rank $r_i$ is its position in the combined sorted list. If there are duplicates, and $x_i$ through $x_j$ are a run of identical values in the sorted list, then one common approach is the midrank[6] method: assign each duplicate a "rank" of $(i+j)/2$. In the above equations, in the expressions $(r_i - i)^2$ and $(s_j - j)^2$, duplicates can modify all four variables $r_i$, $i$, $s_j$, and $j$.
1. ^ Cramér, H. (1928). "On the Composition of Elementary Errors". Scandinavian Actuarial Journal. 1928 (1): 13–74. doi:10.1080/03461238.1928.10416862.
2. ^ von Mises, R. E. (1928). Wahrscheinlichkeit, Statistik und Wahrheit. Julius Springer.
3. ^ a b c d Anderson, T. W. (1962). "On the Distribution of the Two-Sample Cramer–von Mises Criterion" (PDF). Annals of Mathematical Statistics. Institute of Mathematical Statistics. 33 (3): 1148–1159. ISSN 0003-4851. doi:10.1214/aoms/1177704477. Retrieved June 12, 2009.
4. ^ a b Pearson, E.S., Hartley, H.O. (1972) Biometrika Tables for Statisticians, Volume 2, CUP. ISBN 0-521-06937-8 (page 118 and Table 54)
5. ^ Watson, G.S. (1961) "Goodness-Of-Fit Tests on a Circle", Biometrika, 48 (1/2), 109-114 JSTOR 2333135
|
What Is a Quad Band Cell Phone?
by Maxwell Payne
A quad band cell phone is a cell phone that can operate on the four major GSM frequencies used by various cell phone service provider networks. A quad band cell phone is beneficial for those who travel often and may need to access cell phone service even when outside their service provider's area.
Many companies that produce cell phones produce quad band cell phones, allowing them to sell their handsets to cell phone service providers across the globe.
There are many types and brands of quad band cell phones on the market. Phones ranging from basic cell phones to fully functional smartphones may use quad band technology. The types of frequency bands that a quad band phone can access are 850 and 1900 MHz bands which are commonly used in the Americas. 900 and 1800 MHz bands are commonly used in other parts of the world such as Asia and Europe. A quad band phone can potentially function on all of these bands.
Quad band cell phones have a distinct advantage over dual band and tri band cell phones. Since there are four major GSM frequencies used by cell phone service providers across the globe, a dual or tri band cell phone might only work on two or three of those bands. A quad band phone has the ability to work on any of the four frequency bands, as long as the user's cell phone service provider has agreements with the owners of the other networks.
Owning a quad band phone does not ensure that the user will automatically be able to use the phone on any band in any part of the world. The user's cell phone service provider must have established agreements with owners of the cell phone networks in other countries so that the quad band phone can access and use the network. Therefore the quad band cell phone's ability to use the different networks is dependent upon whether or not an agreement exists between the phone's primary service provider and the network that the phone is trying to access.
Most quad band cell phones will automatically switch frequency bands and try to connect to the nearest available GSM network upon being turned on. Users of quad band cell phones should consult with their cell phone service provider before traveling to ensure that the phone will function on another network where travel is occurring. Some features of one cell phone service provider (such as Internet access) might not function the same on a different provider's network. Also, additional fees and costs might be associated with using a cell phone on a different network. It is always a good idea to verify that cell phone service will be available where you are traveling and to clarify any additional fees or costs associated with accessing another company's cell phone network.
Fun Fact
Users are often surprised when they turn on their quad band cell phone after arriving in a new country and find that their screen displays a series of numbers and letters or the name of a local cell phone service provider and not their cell phone service provider. This is because the quad band phone has switched bands and located the nearest available network that it has access to.
|
August 29 2014 Friday at 08:36 AM
Bed Bugs
Identifying Bed Bugs
Bed bugs have an oval body and a short, broad head. The body as a whole is broad and flat. Unfed adults are around 6 to 10 mm long, brown and wingless. After feeding, they swell slightly in size and darken to a blood-red colour. The nymphs are shaped like the adults, but are yellow-white in colour.
Bed bugs are also known by several names: wall louse, house bug, mahogany flat, red coat, crimson ramblers as well as others.
Adults usually live for around 10 months, but can live for a year or more. In a home, where the environment is conducive to their reproduction (their ideal breeding temperature is between 21 and 28 degrees Celsius), bed bugs can breed year round. Bed bugs are wingless and cannot fly or jump, but are able to enter extremely small spaces in the home because of their flattened bodies.
Bed bugs can live for several weeks to several months without feeding, depending on the temperature. They can typically go without feeding for 80 to 140 days; older bed bugs can go without feeding longer than younger ones. Adults have been known to survive for as long as 550 days (over a year and a half!) without feeding.
What do bed bug bites look like?
Four types of skin rashes have been described:
1. The most common rash is made up of localized red and itchy flat lesions. The classic bed bug bites could be presented in a linear fashion in a group of three, which is called "breakfast, lunch and dinner".
2. Small raised red swelling lesions are also common.
4. People with high sensitivity to bed bug saliva may develop a lump filled with blood or fluid.
Bed bug bites most commonly occur on exposed areas of the body, including the face, neck, hands, arms, lower legs, or all over the body.
How can you treat bed bug bites?
What do bed bugs feed on?
Can I get sick from bed bugs?
There are no known cases of infectious disease transmitted by bed bug bites. Most people are not aware that they have been bitten, however some people are more sensitive to the bite and may have a localized reaction. Scratching the bitten areas can lead to infection.
Why Bed Bugs are a Problem
One species of bed bug feeds primarily on humans, but there are other species that feed on other mammals and on birds.
How do bed bugs get into my home?
Bed bugs are often carried into a home on objects such as furniture, clothing and luggage. If you think you have a bed bug problem, check for live bed bugs or shells in the following areas:
- Seams, creases, tufts and folds of mattresses and box springs
- Cracks in the bed frame and head board
- Under chairs, couches, beds and dust covers
- Between the cushions of couches and chairs
- Under area rugs and the edges of carpets
- Between the folds of curtains
- In drawers
- Behind baseboards and around window and door casings
- Behind electrical plates and under loose wallpaper, paintings and posters
- In cracks in plaster
- In telephones, radios and clocks
Bed bugs can also travel from apartment to apartment along pipes, electrical wiring and other openings. If the infestation is heavy, a sweet smell may be noticed in the room.
What You Can Do Around Your Home
The best method to deal with bed bugs is to employ an integrated approach that combines a variety of techniques and the use of a chemical insecticide, such as Bug-Tek, that poses the least risk to human health and the environment.
Bed bugs are small and can hide in a myriad of places - under wallpaper, behind picture frames, in electrical outlets, inside box springs, in mattress pads, in night tables, etc... You must be very thorough in order to properly address bed bug infestations. As bed bugs can travel up to 30 meters and can be transported in clothing, luggage or other household items, you may have to treat nearby rooms to prevent the infestation from continuing.
- Carefully examine all night tables, baseboards, dressers, headboards (especially padded ones), electrical outlets, any items stored near or under the bed, any nearby carpeting or rugs, picture frames, switch plates, inside clocks, phones and televisions and smoke detectors - in short, anything and everything that is in the room where the infestation has been noted. Upholstered chairs and sofas can also harbour bed bugs and should be treated with careful vacuuming and laundering of all possible parts (cushions, slipcovers, skirts, etc.).
- Treat the infestation with Bug-Tek insecticide. Bug-Tek is totally safe and non-toxic to humans and warm-blooded pets, such as dogs and cats, but is very effective in killing bed bugs.
Bed Bug Information for Landlords and Property Managers
Multi-unit dwellings, including hotels, apartments, hostels, shelters, student residences and rooming houses, are high-risk locations for bed bug infestations. The best method to deal with bed bugs is to employ an integrated approach that combines a variety of techniques and the use of a chemical insecticide, such as Bug-Tek that poses the least risk to human health and the environment.
Collaboration between tenants and landlords is necessary to eliminate bed bug infestations. The following steps are recommended for landlords and property managers dealing with bed bug infestations.
1. Prevention: Seal cracks and crevices between baseboards, on wood bed frames, floors and walls with caulking. Repair or remove peeling wallpaper, tighten loose light switch covers, and seal any openings where pipes, wires or other utilities come into the home. It is also important to pay special attention to walls that are shared between apartments.
2. Respond to tenant's complaints about bed bugs and conduct proper inspections. Consult with local Public Health authorities to confirm bed bug infestations.
3. Review the bed bug management process with tenant(s) in bed bug affected unit(s) to ensure understanding and adherence to the steps taken.
5. Treat the infestation with Bug-Tek or provide sufficient insecticide for the tenant to do the treatment himself. Bug-Tek is totally safe and non-toxic to humans and warm-blooded pets, such as cats and dogs, but is very effective in killing bed bugs.
6. Ensure that an inspection by the property manager/landlord is carried out following treatment to assess the treatment's effectiveness and determine if more spray is needed. Often more than one treatment is required.
7. Furniture put out from the infested units should be removed as soon as possible and dismantled so that it is not picked up by someone else.
Controlling Breeding Sites
Smaller items that cannot be laundered can sometimes be treated through heating (temperatures greater than 50 degrees Celsius) or freezing. These temperatures must be maintained for a prolonged period of time (e.g., 2 days of cold exposure at 0 degrees Celsius) to ensure that the bed bugs are killed.
Vacuuming can be helpful in removing bugs and eggs from carpet, mattresses, walls and other surfaces. It is very important to pay close attention to seams, tufts and edges of mattresses and box springs, and the outer edge of wall-to-wall carpeting. Steam cleaning carpeting can also be effective in killing bugs and eggs not picked up by regular vacuuming.
Chemical Control Methods
Bug-Tek is a great solution for bed bug infestations. It effectively kills bed bugs on contact, yet is totally safe for indoor use.
- Odourless
- Non-staining
- Safe for humans
- Safe for warm-blooded pets (dogs, cats, etc...)
- Residually effective for up to 6 weeks
- May be applied by the property owner
- Ready to use, no mixing or dilution required
- Water-based
- Biodegradable
Application Directions
- Use the convenient trigger sprayer that comes with the bottle, or decant Bug-Tek into a commercial sprayer if treating larger areas.
- Liberally spray all areas where bugs may reside or travel. See detailed explanations of areas as described above.
- Allow to dry. It is safe to touch, walk and sleep on treated areas.
- Repeat treatment in two weeks.
- Repeat treatment in another two weeks. Since bed bugs can remain in one place without feeding for very long periods, they may not be affected by the initial spraying if they never come into contact with the insecticide.
- After the third treatment, the retreatment cycle may be extended to four weeks.
Note that bed bug infestations can be challenging to treat, and repeat applications are often required. Between applications of Bug-Tek, use physical pest management techniques to minimize ongoing or future infestations.
|
Should the United States Leave the United Nations?
The United States joined the United Nations at its creation in 1945. To this day, the United States is the largest contributor, giving over $650 million every year. Almost since the beginning, the United Nations has been a failed international organization. Violations have occurred time and time again and have gone without any consequences. The Trump administration believes that the United Nations Human Rights Council shows anti-Israel bias and ignores violations by certain countries. Nikki Haley, the U.S. ambassador to the United Nations, states that although they aren't thinking of leaving the UN, they do see room for "significant strengthening." Haley took to social media to say, "US will not sit quietly while this body, supposedly dedicated to human rights, continues to damage the cause of human rights." During Haley's time in Geneva, she addressed the council directly, calling out the world's "worst human rights abusers", including Venezuela, Cuba, China, Burundi and Saudi Arabia.
There are various instances in which the United Nations has let actions pass without consequences. For example, the UN previously appointed Saudi Arabia to chair a panel of the United Nations Human Rights Council, despite the country having one of the worst records on women's rights. Women can't drive, leave their house without a male guardian, take part in civic engagement, etc. Atheists, homosexuals, and others are killed. A more recent issue was the UN's lack of action on the genocide in Syria. It has been almost 5 years and Syrian dictator Bashar al-Assad has killed civilians nonstop. In addition, there has been no action taken against ISIS. The UN has failed multiple times to investigate ISIS' crimes, not gathering evidence from mass grave sites. The UK has sought a Security Council investigation into ISIS' crimes in Iraq, clearly needing the help of the UN. The Iraqi government has expressed support and can send a letter to trigger a UN vote on the issue. However, the letter is not needed to vote, and the UK's set deadlines have passed without a response from the UN. Whether or not the United States should leave the UN, the UN does need to fortify its rules and remember its principles and ambitions of world peace.
The Aftermath of The Inauguration
On Friday, January 20, 2017, the 45th President was brought into office at the United States Presidential Inauguration. Donald Trump made a vow to put "America first" and take power out of the hands of the Washington elite. The gender gap in the election was huge: among men, Trump beat Clinton 53 percent to 41 percent; among women, however, Clinton won 54 percent to 42 percent. In contrast, among white women, Trump beat Clinton 53 percent to 43 percent. The Women's March drew millions upon millions of protestors in numerous cities, including Washington D.C., New York City, Boston, Chicago, Miami, Oakland, St. Louis, Los Angeles, Phoenix, Denver, and Seattle. Many protestors held signs opposing the plans of the new administration and chanted, "Welcome to your first day, we will never go away." Trump tweeted on the issue, "Watched protests yesterday but was under the impression that we just had an election! Why didn't these people vote? Celebs hurt cause badly." Shortly after, he tweeted, "Peaceful protests are a hallmark of our democracy. Even if I don't always agree, I recognize the rights of people to express their views."
|
Loop termination confusion
So I have this code to enter a series of integers into a sequence, but I would like to know how I can terminate the while loop just by entering no value and pressing enter. I know that I can put in a condition like terminating the loop if the integer value x is 0.
public static void main(String[] args) {
SimpleReader in = new SimpleReader1L();
SimpleWriter out = new SimpleWriter1L();
Sequence<Integer> s = new Sequence1L<>();
Sequence<Integer> temp = s.newInstance();
System.out.println("Enter ");
int x = in.nextInteger();
int i = 0;
while (in.nextLine()) {
s.add(i, x);
x = in.nextInteger();
You need to change the way you're reading input if you want to achieve what you described - enter nothing.
First having a while (in.nextLine()) eats an extra line from your input. So half of your input lines are just lost.
I'd suggest reading the line like String line = in.nextLine(). Then something like:
if (line.equals("")) break;
int x = Integer.parseInt(line);
Sorry, not doing java lately to give you the whole loop. But I think you should get the idea.
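For what it's worth, here is a rough, self-contained sketch of the whole loop using the standard java.util.Scanner in place of SimpleReader (treat that swap and the names below as assumptions, since I don't have the components library in front of me):
import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;

public class ReadUntilBlank {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        List<Integer> s = new ArrayList<>(); // stands in for Sequence<Integer>
        System.out.println("Enter integers, one per line; press enter on an empty line to stop:");
        while (in.hasNextLine()) {
            String line = in.nextLine().trim();
            if (line.isEmpty()) {
                break; // a blank line terminates the loop
            }
            s.add(Integer.parseInt(line)); // throws NumberFormatException on non-numeric input
        }
        System.out.println("You entered: " + s);
    }
}
The same structure should carry over to SimpleReader: read each line as a String, break on the empty string, and only then parse it to an int.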
|
What Is a Cerebrovascular Accident (CVA)?
A cerebrovascular accident (CVA) is the medical term used to describe all of the different types of stroke. A cerebrovascular accident means that the blood vessels in the brain have had some type of malfunction.
What is cerebrovascular?
The cerebrum is the scientific name for the brain. Vascular is a term that refers to all blood vessels, including arteries, veins, and capillaries.
The arteries that bring blood to the brain from the heart and the veins that carry blood away from the brain towards the heart are cerebral vessels.
Tiny blood vessels called capillaries also supply the brain with blood and remove waste materials from the brain.
What is a cerebrovascular accident?
A cerebrovascular accident is often called a CVA. It means that one of the cerebral vessels has malfunctioned in some way. If an artery is blocked, then an ischemic event occurs. If an artery leaks or ruptures, then blood damages the brain. This is called a hemorrhage.
Usually, veins and capillaries do not cause problems in the brain, but if a vein or capillary leaks, then that also causes a cerebrovascular accident.
What are the effects of a cerebrovascular accident?
The cerebral vessels are nicely routed to specific territories throughout the brain. Each cerebral vessel corresponds to a certain region of the brain. Each region of the brain corresponds to a certain action of the body or mind, such as thinking, sensing the environment or moving a part of the body.
There are immediate effects of a cerebrovascular accident and there are often also long term effects. The effects of a cerebrovascular accident are caused by an injury to the portion of the brain that is damaged by bleeding or lack of blood. In turn, the bodily function controlled by the damaged area of the brain becomes impaired.
Can you have a temporary CVA?
A cerebrovascular accident may cause temporary, reversible effects if the blood vessel is only blocked for a few seconds. This is called a transient ischemic attack (TIA). A TIA is often a warning sign of a stroke.
A stroke occurs when a cerebrovascular accident lasts for more than a few seconds- causing permanent brain damage. To learn more about how a CVA causes brain damage, see this article.
Types of cerebrovascular accidents
A cerebrovascular accident is a medically descriptive name for a stroke, and it includes all of the different kinds of strokes.
The type of blood vessel problem that caused the cerebrovascular accident, such as ischemic (blocked blood vessel) or hemorrhagic (bleeding blood vessel), may identify a CVA.
An ischemic stroke can be named for the type of blood clot that caused it, such as an embolic clot (a blood clot that traveled from somewhere in the body to the brain) or a thrombotic clot (a blood clot forming within a blood vessel and blocking it).
A cerebrovascular accident is sometimes named after the malfunctioning blood vessel, such as a middle cerebral artery stroke. Or, a cerebrovascular accident can be labeled by the location in the brain that is damaged, such as a brainstem stroke.
Sometimes, cerebrovascular accidents are given a special name to describe an unusual mix of signs and symptoms, such as Wallenberg syndrome or locked in syndrome.
CVA Prevention
A cerebrovascular accident is always an accident, meaning that it is an unwanted, unpredicted and sudden event. But there are some clues that you might be at risk of having a CVA. Sometimes, people have strange experiences that seem like premonitions of a CVA. There are also some predictors of having a CVA. And, most importantly, it is worthwhile to be familiar with the most common signs of a CVA, because recognizing them can save your life or someone else's.
|
John Locke (1632-1704) was an English philosopher who is considered to be one of the first philosophers of the Enlightenment and the father of classical liberalism. In his major work Two Treatises of Government Locke rejects the idea of the divine right of kings, supports the idea of natural rights (especially of property), and argues for a limited constitutional government which would protect individual rights.
We see in commons, which remain so by compact, that it is the taking any part of what is common, and removing it out of the state nature leaves it in, which begins the property; without which the common is of no use. And the taking of this or that part does not depend on the express consent of all the commoners. Thus the grass my horse has bit; the turfs my servant has cut; and the ore I have digged in any place, where I have a right to them in common with others; become my property, without the assignation or consent of any body. The labour that was mine, removing them out of that common state they were in, hath fixed my property in them.
|
Wednesday, February 23, 2011
Helping the Auditory Defensive Child
Wednesday, February 16, 2011
Auditory Defensiveness, the Hidden Behavior Problem
If your child is having tremendous difficulty comporting himself at school, is aggressive towards his classmates, can't participate in noisy classrooms, either going off by himself or acting out, and in general behaves in ways that are difficult to comprehend when he is in a noisy environment, the chances are very high that his ears are hurting him and that he has no way to let you know that he is suffering.
Consider these scenarios, taken from my files:
1. An exceptionally beautiful, creative little girl, who should be the most popular girl in school, but instead tends to keep to herself. She does not interact or socialize with her classmates during lunch, gym, or recess. While the other children are happily and noisily chatting and laughing with each other as they sit around a table and draw, cut, and color, she sits a bit off to the side, head down, and does what is required of her, but does not join in the conversation. She refuses to accept any party or play date invitations on the weekends, preferring to spend her time alone in her bedroom with the door closed.
2. A bright, handsome little boy who is labeled emotionally disturbed. He cannot comport himself at school or in public. He is verbally and physically aggressive, sarcastic, oppositional, and constantly spoiling for a fight. His OT sessions the previous year with a different therapist were a disaster. The therapist had to spend most of his treatment time trying to help him contain himself in the busy clinic.
{This year, he comes to OT at a time when he and the therapist work together alone. He is affable, cooperative, and works hard.}
3. A little boy who is the bane of his gym teacher's life. Normally, he is a total delight to be around, charming, sweet, and funny. He is a polite child who does well in school, but his behavior in her class has everyone in an uproar. He interrupts her frequently, is oppositional and rude, disrupts the games, and slugs his classmates.
4. A little boy who always enters the gym {at 3:30, the busiest time of day in the clinic} making a face like an angry grizzly bear. He won't make eye contact or say hello. He does not respond to the therapist when she calls his name, but wanders around sulkily, arms crossed over his chest, picking up balls and toys and throwing them aimlessly.
When he does allow himself to interact, he says an emphatic no to every activity suggestion proposed to him, shaking his head and sticking out his lower lip. At school, he is constantly getting in trouble for throwing things, talking out of turn, and being silly, especially during arts and crafts, when the teachers play music in the background, or when the children work in groups, building block structures together on a hard linoleum floor.
{A few weeks ago, he came at a time when he and the therapist were working alone. He was cooperative, playful, motivated, and full of ideas about how to challenge himself.}
5. A little boy is observed at school one day in his noisy, busy classroom. The children work on the activities of their choice in small groups, sitting around small tables. Instead of choosing an activity and being seated, he runs around the room, unable to sit still, follow directions, or to engage in purposeful activity. He is constantly reprimanded for knocking the other children's projects off the tables.
{At the OT's suggestion, he was switched to a smaller, much more structured, quieter classroom, and immediately began to demonstrate goal directed behaviors.}
6. A little girl whose parents experience her at home as supremely creative and intelligent, unusually physically coordinated and daring, feisty, funny, witty, and loving, comes home with this report card from her preschool: Below average performance on most academic subjects. Does not appear to understand what the teacher is saying and is frequently observed watching the other children for cues. Cannot play games like Simon Says. Sits by herself most of the time, chewing on her hair or on a sleeve. Does not initiate any activities in a school where children are encouraged to be self directed.
7. A charming, affectionate little boy who is the sweetest, most loving big brother imaginable to his baby sister, observed one rainy morning during indoor recess: He runs aimlessly around and around the periphery of the room. He then picks up and starts throwing a Frisbee, aiming it directly at the backs of his classmates' heads. When the Frisbee is confiscated, he picks up a plastic disc, runs up to his classmates, and whips the disc at them from behind, causing them to cry out in surprise and pain. When this is taken away, he returns to running around and around in circles, until he finds a ball and begins to throw it into the center of the room, again carefully aiming it at his classmates' backs.
His teachers report that when the children are lining up in the hallway, he will often suddenly tackle the child next to him and slam him up against a wall.
8. A 43 year old man who suffers from severe anxiety. He avoids parties and restaurants because it's too difficult for him to make conversation. His wife reports that when they are sitting quietly at home, reading or watching television, he frequently turns to her and says, "What's that noise?"
9. A very sweet, smart, friendly little boy who is reported as having a very flat facial expression at school, and who can never get his work done in class. His teacher's policy is to allow the children to be self directed and do what they need and want to do during class time, as long they get their lessons learned and their assignments finished, so the classroom is noisy and busy during writing time.
10. A bright, charming little boy who flat out refuses to eat lunch in the school cafeteria, no matter how hungry he may be. His teachers think he is a bit dim, because he rarely participates in class. He is consistently pale, mopey, and red eyed at the end of the day.
What do all of these people have in common? They are all extremely sensitive to noise, and they can't cope in noisy environments. Some of them respond by tuning out and shutting down, and some of them respond by acting out and doing anything they can think of in an effort to be removed.
If your child becomes disorganized, disruptive, oppositional, aggressive, or highly anxious in chaotic environments, becomes overwrought at live performances, hums, covers his ears, grits or grinds his teeth, chews on anything he can get his hands on, or gets distracted by every voice and sound that crosses his path, chances are good that his ears are not filtering and dampening noise effectively and that he is confused and even in pain as a result.
In my next post, I'll talk about auditory defensiveness, some of its causes, and what sensory based OT has to offer to an auditory defensive child -- or grownup.
Wednesday, February 9, 2011
Managing Circle Time
Circle time can be extra challenging for children who have a hard time in school. Sitting on the floor with backs unsupported is very difficult for low tone kids.
There should be a variety of sitting options for circle time. Since no one wants to be singled out, I suggest that there be a few chairs around the perimeter of the circle at the beginning of the school year. The teacher can invite the children to try both sitting in them and sitting on the floor, and then deciding which they prefer. Eventually, the children who need them will go on using the chairs, and the rest of the children will choose the floor, and no one will notice or care who sits where because they have all tried all of the options and made their choices. If this is truly not possible, then having the child sit with his back against a solid surface, like the wall, is the best choice. Or perhaps a few floor chairs could be kept in a cubby and made available.
A child who has difficulty self regulating, or who tends to lash out when others are in his personal space, will do best sitting on a piece of furniture, which defines personal space, next to an adult. These children will be happiest with their backs covered. A chair is good. Placing the chair up against a solid wall would be better. Sitting on a chair in a niched corner would be best. Something to occupy the child discreetly and quietly would be very helpful here.
Do you tend to sit in meetings and utilize fidget toys to keep yourself present? Do you perhaps bend paper clips, doodle, roll up the paper from the straw in your drink, fold dollar bills into origami, spill sugar on the table and draw in it with a finger, or play with a rubber band? Or do you take a craft project, like knitting or needlepoint, with you when you have to sit for a long time? It helps, doesn't it? Children need to be able to do this as well.
I always send a little bag of fidget toys, generally a collection of little stretchy animals, in to the classrooms of the children I treat for the teacher to hand out as appropriate. One teacher told me that when she notices my friend starting to zone out during circle time, a little stretchy frog discreetly pressed into her hand is enough to keep her focused and present. A child who chews a lot could benefit from a plastic drinking straw or some fishtank tubing to chomp on to keep steady. {Chewing is a sign that the child either needs to move his body or that the room is too loud.} What would be really wonderful is if the children had little sewing or needlework projects they could work on while they were sitting. Do you listen better with busy hands? I know I do. I always bring my crocheting to board meetings.
Teachers, if you are consistently having a hard time keeping the children in your classroom engaged during circle time, it's because you are expecting them to stay still when they need to move.
To prevent having to expend all of your energy on disciplining the children instead of teaching them, you can try making sure that long stretches of sitting down time are preceded by some movement activity and perhaps a drink of water, so that the kids can maintain their alertness. Remember, movement is what activates the brain and drives development forward!
When children are restive and you are unable to engage them, if they are not hungry, thirsty, or needing to use the bathroom, they need a brief movement break. If you tell the children that you see that they are having a hard time sitting still and say it is time for a quick movement break so that they can activate their brains, you will be helping all of the children recognize when their alertness levels are starting to flag, and teaching them some strategies to maintain a good arousal state for learning.
Something else that I would like to see change in the classroom is forcing the children to sit "Criss Cross Applesauce" during circle time. I have been to several classroom observations where the teacher flatly refused to continue until all of the children, including my little friends, who can't maintain an upright posture while in that position, were sitting like this.
Can you sit that way? I can't. People in Western society, who spend almost all of their time in chairs and in cars, have lost the ability to squat or to sit comfortably on the floor. If you observe most preschoolers sitting on the floor with their legs crossed, their spines are quite rounded. This is an unhealthy way to use the spine, and leads to back problems as we get older. I have vivid memories myself of sitting on the floor in kindergarten and feeling how hunched and rounded my back was and how miserable I felt. We should not be teaching our children to use themselves badly! As we get older and become more accustomed to slouching, something we learn in school when we are forced to do things we are not ready or able to manage, we cause ourselves real damage. Schools should not be perpetrating this on the children in their trust.
A few better options: teaching the children to sit on their heels, allowing them to lie on their bellies with their elbows propping them up, or providing them with a firm cushion, like a meditation cushion, or zafu.
It's an unfortunate truth in American society that we have lost our attention spans, and it's true in the classroom as well. I have been to many classroom observations where an activity, especially in a lower grade, went on for too long. Is circle time just going on for too long? Is it too close to lunch? Are the children having a hard time comporting themselves because they are tired of sitting still, hungry, thirsty, or have to go to the bathroom?
Some keys to success: keep circle time short, keep the lesson engaging and the children's participation active, provide postural support and discreet fidget toys to those who need them, and don't force children who really can't manage the physical proximity to sit close to their classmates.
Wednesday, February 2, 2011
Why Can't My Child Behave During Circle Time?
Which do you prefer in a restaurant, a banquette or a table in the middle of the room? Banquette, right? Ever wonder why? It's because your back is not exposed to people walking or moving behind you, so it's easier to let down your guard and focus on the meal and conversation.
A sensory defensive, hypervigilant child can't truly concentrate with his back exposed, which is often the case during circle time.
Circle time is often one of the most difficult school related activities for the children I treat. Over the years I have seen so many children fail to meet the grownups' expectations when they are required to sit quietly and attend while on the floor. They can't pay attention, they move around, they speak out of turn, they lie down, they tune out, they lash out.
Why are they acting out during such a seemingly innocuous time of day?
Circle time often means sitting very close to the person next to you, with no furniture to help guard and define the boundaries of your personal space. Children who are tactile defensive generally don't like to sit in close proximity to others, especially to other children, who are less predictable, and therefore potentially more threatening, than adults. I have often seen tactile defensive children suddenly, and with no warning, strike out at classmates who were sitting too close by. It's like petting a cat who suddenly startles and scratches. The light touch receptors get overloaded, a switch gets tripped, the fight or flight part of the brain instructs the system to defend itself against an attacker, and... pow!
Another circle time issue I have observed over and over: two children get into an altercation while sitting next to a sensory defensive child, who immediately joins the fray. Again, it's that primitive part of the brain getting tripped, and the child not having yet developed the restraint to be able to override the primitive response.
Children who don't have good extensor tone or a strong trunk have a terribly difficult time sitting unsupported on the floor. If you observe a child sitting on the floor with his legs in front of him, and his arms extended behind him to support his upper body, he's working so hard to keep himself there that there's not much energy or attention left over to devote to the day's lesson.
If a child simply can't sit still during circle time, moving and changing his position constantly, chances are he's very uncomfortable and is seeking a way to hold his body that won't be effortful and/or painful to maintain.
Children with sensory issues don't possess good regulation of their arousal and alertness levels. Their engine speeds are often too low, and they struggle to stay present and tuned in during class. Often, when you put them so close to the ground, gravity beckons, and they're gone. This is especially true if they're not good breathers, eaters or sleepers; their bodies are chronically short on oxygen, fuel and neurotransmitters.
Judging from my observations over the years, circle time is not very exciting, especially to an under reactive child who requires more intensity than others to maintain his engagement and arousal in a classroom setting. Generally circle time consists of a lot of talking and no movement, so a child who needs to move, and a certain amount of excitement in order to keep himself tuned in, doesn't have much luck on the floor.
Have you ever been to a movie or a live performance that went way over your head or was just really, really, boring? Did you tune out, daydream, or even fall asleep?
Auditory defensive children often unconsciously block out voices and language, so if they are sitting on the floor, struggling to remain upright, a bit short of sleep, oxygen and nutrition, and don't really comprehend what's happening... can you blame them for not being fully present?
In my next post, I'll talk about some of ways I help the children I treat manage circle time.
|
% =================================================================
% ==                                                             ==
% ==    An Introduction to ARTIFICIAL INTELLIGENCE               ==
% ==    Janet Finlay and Alan Dix                                ==
% ==    UCL Press, 1996                                          ==
% ==                                                             ==
% =================================================================
% ==                                                             ==
% ==    chapter 2, page 35: non-monotonic reasoning              ==
% ==    TMS - truth maintenance system                           ==
% ==                                                             ==
% ==    Prolog example, Alan Dix, August 1996                    ==
% ==                                                             ==
% =================================================================

% Items and their dependencies are coded using the predicate 'item'.
% Each item is either of the form:
%     item( item number, description, sl([positive support],[negative]) ).
% which says that the given item is dependent on the positive list
% being true and the negative list all false,
% or:
%     item( item number, description, unknown ).
% which says that the item has not been given a support list and its
% truth is unknown.  This is different from the empty support list
% 'sl([],[])', which says that the item does not depend on anything
% - it is a fact.
%
% In the book the items in the support list are given a + or - sign
% to say whether the item depends on their truth or falsity.
% However, in this Prolog representation, this is assumed from which
% list they are in.

item( 1, 'it is winter', sl([],[]) ).
item( 2, 'it is cold',   sl([1],[3]) ).
item( 3, 'it is warm',   unknown ).

% Before starting the truth maintenance procedure an easier
% to use dependency representation is constructed.
% For each pair X, Y where X depends on Y, a Prolog term
% is asserted tms_dep(X,Y,true) if it is a positive dependency
% and tms_dep(X,Y,false) if it is a negative dependency.
% This would be a very verbose representation to write, but
% is easier to use in Prolog than the list based representation.
%   tms_clear_dep   - removes any existing dependency information
%   tms_build_dep   - actually builds the new representation
%   tms_build_dep_2 - runs through a list of items adding a dependency
%                     for each

tms_clear_dep :-
    retractall(tms_dep(X,Y,PorN)),
    assert(tms_dep(-1,-2,null)).          % null entry so that
                                          % tms_dep always exists

tms_build_dep :-
    item(Item,Desc,sl(Pos,Neg)),
    tms_build_dep_2(Item,Pos,true),
    tms_build_dep_2(Item,Neg,false),
    fail.
tms_build_dep.

tms_build_dep_2(Item,[],PorN).
tms_build_dep_2(Item,[First|Rest],TorF) :-
    assert(tms_dep(Item,First,TorF)),
    tms_build_dep_2(Item,Rest,TorF).

% tms_update completely rebuilds the truth values
% by setting everything to false and then setting
% the given facts to true using tms_do_set.
% (N.B. given facts = empty support list - 'sl([],[])')
% The setting of facts to true uses 'tms_do_set'
% which chases dependencies setting other items to
% true or false appropriately.
% At the end the whole system will be at a consistent truth state.

tms_update :-
    retractall(is_true(X,Y)), !,
    tms_set_all_false,
    tms_set_facts_true.

tms_set_all_false :-
    item(Item,Desc,SL),
    assert(is_true(Item,false)),
    fail.
tms_set_all_false.

tms_set_facts_true :-
    item(Item,Desc,sl([],[])),
    tms_do_set(Item,true),
    fail.
tms_set_facts_true.

% tms_do_set sets the relevant item to be either true or false
% dependent on the second argument.
% If there is no change, then tms_do_set simply succeeds and does
% nothing.
% If there is a change, then this may have altered the truth of
% other items.  In this case tms_do_dep is called to check and,
% if necessary, to update any dependent items

tms_do_set(Item,TorF) :-
    is_true(Item,TorF), !.                % do nothing if no change
tms_do_set(Item,TorF) :-
    is_true(Item,NotTorF),
    retract(is_true(Item,NotTorF)),
    fail.                                 % remove old fact if set
tms_do_set(Item,TorF) :-
    assert(is_true(Item,TorF)), !,        % end of sequential part
    tms_do_dep(Item,TorF).

% tms_do_dep is called when an item changes its truth.
% It checks to see if any dependents of the item should change
% their truth.  This can happen for two reasons.
%  (i)  If an item now agrees with a dependency then the dependent
%       item MAY become true, but all other dependents must also be
%       checked by calling tms_recheck
%  (ii) If the item now disagrees with a dependency, the dependent
%       item MUST become false.  No other checking is necessary.

tms_do_dep(Item,TorF) :-
    tms_dep(X,Item,DepTorF),              % item now agrees with dependency
    tms_do_dep2(X,TorF,DepTorF),
    fail.
tms_do_dep(Item,TorF).

tms_do_dep2(X,TorF,TorF) :-               % item now agrees with dependency
    tms_recheck(X).                       % X may now be true, check it
tms_do_dep2(X,TorF,NotTorF) :-
    not TorF = NotTorF,                   % item now disagrees with dependency
    tms_do_set(X,false).                  % X must therefore be false

% An item must be false if any of its dependent items disagree.
% That is if the item is dependent on another item X and
%   either X is false and is a positive dependent
%   or     X is true  and is a negative dependent

tms_is_false(Item) :-
    tms_dep(Item,X,TorF),
    is_true(X,NotTorF),
    not TorF = NotTorF.

% tms_recheck looks at dependencies to see if an Item
% which used to be false has now become true

tms_recheck(Item) :-
    is_true(Item,true).                   % if it is true already do nothing
tms_recheck(Item) :-
    is_true(Item,false),
    not tms_is_false(Item),               % are any dependencies wrong?
    tms_do_set(Item,true).                % if not it must be true!

% tms_init and tms_set are the predicates that you use to interact
% with the truth maintenance system.
% tms_init should be called when you start and subsequently
% if you want to reset the system.
% tms_set can only be used to set the truth or falsehood of any item
% which has an 'unknown' value.  Other values are calculated using
% the dependencies in the support lists.

tms_init :- tms_clear_dep, fail.
tms_init :- tms_build_dep, fail.
tms_init :- tms_update.

tms_set(Desc,TorF) :-
    item(Item,Desc,unknown), !,
    tms_do_set(Item,TorF).
tms_set(Desc,TorF) :-
    item(Item,Desc,sl(Pos,Neg)),
    write('Item '), write(Item), write(': '), write(Desc),
    write(' has support list, cannot be set true or false'), nl,
    fail.

% tms_list prints the truth/falsity of all items

tms_list :-
    item(Item,Desc,SL),
    is_true(Item,TorF),
    write(Item), write(': '), write(Desc),
    write(' - '), write(TorF), nl,
    fail.
tms_list.

% RUNNING THIS CODE
%
% The example given is a bit small (3 items!), so there is not
% much you can do.  However, even with three items you can check
% things function as expected.  Here is an example script:
%
%   tms_init.                        % necessary as first action
%   tms_dep(X,Y,T).                  % examine the constructed dependencies
%   is_true(X,T).                    % see what has been deemed to be true
%   tms_set('it is winter',false).   % see the error message
%   tms_set('it is warm',true).      % really set an item
%   tms_list.                        % see how it has affected the other items
%
% After doing this you can try and build a more extensive set of
% items and support lists by making your own 'item' definitions
%
|
@article{Gall:2011:1938-6478:80,
  title = "Assessment of Suitable Drinking Water Technologies for Disinfection of DNA Viruses: Providing Global Safe Water",
  journal = "Proceedings of the Water Environment Federation",
  parent_itemid = "infobike://wef/wefproc",
  publishercode = "wef",
  year = "2011",
  volume = "2011",
  number = "3",
  publication date = "2011-01-01T00:00:00",
  pages = "80-83",
  itemtype = "ARTICLE",
  issn = "1938-6478",
  url = "http://www.ingentaconnect.com/content/wef/wefproc/2011/00002011/00000003/art00010",
  doi = "doi:10.2175/193864711802863706",
  keyword = "PRD1, adenovirus, virus inactivation, Disinfection, drinking water",
  author = "Gall, Aimee and Shisler, Joanna L. and Mari{\~n}as, Benito J.",
  abstract = "Waterborne pathogens are increasingly a worldwide concern in drinking water because of their ability to cause high levels of morbidity and mortality. Especially in developing regions, a lack of access to safe drinking water, adequate sanitation, and resources to implement water treatment processes contributes to the spread of these pathogens. Without adequate protection of drinking water sources, waters can become heavily contaminated by human and animal waste which contributes to the spread of a range of pathogens including viruses. Because of the high prevalence of waterborne diseases, current and emerging technologies for water disinfection are important to study in these areas. Point-of-use disinfection technologies are a viable treatment method in developing regions and implementation has showed an improvement in health and the potential for a sustainable solution; however, many systems currently used are not always completely effective in these challenging surface waters common to developing regions. Small community scale systems are also common in developing regions, but sometimes do not have adequate disinfection steps to prevent the spread of disease. Emerging pathogens are also of concern in water treatment for communities in developed regions as they can be highly resistant to certain treatment technologies. Viruses are of particular concern not only because of their virulence and ability to have high resistance to inactivation, but also because of the limited knowledge available. Viruses pathogenic to humans are not easy to study in the laboratory or in the field because of strict biosafety regulations. Additionally, human viruses typically require the use of cell cultures which are time consuming to propagate, expensive, easily contaminated, and require specific conditions for growth that can be nearly impossible to achieve in regions that have no access to electricity. Because of these challenges, there is a need to identify appropriate viral pathogen surrogates for testing the robustness of treatment technologies in the field and laboratory. A human pathogenic DNA virus, adenovirus, is present globally in drinking water sources and can cause a variety of human health effects. Adenovirus is known to be highly resistant to disinfection technologies such as ultraviolet (UV) light, combined chlorine, and solar disinfection (SODIS). One of the most commonly used surrogates is the single-stranded RNA bacteriophage MS2, which does not show similar inactivation to some human viruses including adenovirus. Because of its similar size, morphology, and genome replication mechanism, the DNA bacteriophage PRD1 is a promising surrogate for adenovirus. Additionally, researchers have hypothesized that the two viruses are evolutionarily related.
PRD1 has numerous gramnegative bacteria host organisms including Escherichia coli and Salmonella typhimurium. This research investigates if PRD1 with appropriate host is a proper surrogate for adenovirus serotype 2 when exposed to chemical disinfectants, ultraviolet light, and sunlight, through the comparison of inactivation kinetics. Since the two viruses have such similar capsid structures, using PRD1 as a surrogate for adenovirus may help to elucidate mechanisms of inactivation of adenovirus. Elucidating the mechanism of inactivation of the virus could then lead to the development of more robust drinking water disinfection technologies and the development of sensors to detect viruses in drinking water. Identifying a surrogate would be exceptionally useful for furthering laboratory research and for improving drinking water disinfection systems globally.", }
|
Disability: Eugenics and United States
Taking historical perspectives, discuss the social construction of disability
The purpose of this essay is to discuss the social construction of disability in the eighteenth and nineteenth centuries. I will do this by taking a historical perspective on eugenics and by looking at how disability has been viewed and treated in the past and present. This historical perspective will draw links between eugenics, present-day stereotypes associated with persons with disabilities, and how professionals use their skills to try to cure disability (the medical model). The term eugenics was created by Francis Galton, a cousin of Charles Darwin, the scientist who laid the foundations of the theory of evolution and transformed the way we think about the natural world (Glad, 2006). Galton defined eugenics as the study of hereditary improvement of the human race by controlled selective breeding. The story of eugenics is one of the best ways of explaining the social aspects of science, technology and the power of medical professionals, which is often left unquestioned and too often ignored. Mitchell and Snider (2003) state that people with disabilities were excluded from society, and that this was based upon the power of scientific and management systems. This view is supported by Albrecht et al (2001), who argued that the medical model of defining and classifying disability became widely accepted in the eighteenth and nineteenth centuries. Albrecht et al (2001) state that professionals used their scientific methods to point out those who were called feeble-minded, psychopathic, epileptic, suffering from mental disease or any other kind of impairment, and these people were identified as being nothing more than a threat or a burden on the state. Taking the work of eugenics into account, Glad (2006) notes that Galton proposed that it would be possible to screen out "inferiors", producing a better future generation and population. The bodies which were labeled as "defective" became "the central point of violent European and American efforts to engineer a 'healthy' body politic" (Mitchell and Snider, 2003, pp. 843). The belief in and control of heredity played a significant role in the history of the social construction of disability in European countries. One of the programs carried out during this period was the sterilisation program that targeted people with impairments in the United States of America. According to Kerr and Shakespeare (2002), the United States of America's compulsory sterilisation had a major impact and became a favorable solution. Oliver and Barnes (1998) state that economic and cultural development provided power for developing policies of exclusion; hence, throughout this period, the policy of segregating people with impairments into institutional settings slowly increased and extended to other disadvantaged groups. People with impairments, the lower classes and ethnic minorities were most affected, as they constituted the bulk of the inmates (Kerr and Shakespeare, 2002). According to Kerr and Shakespeare (2002), there were concerns about the declining birth rate amongst the middle classes and the uncontrolled reproduction of the "unfit" amongst the lower classes. In response, the middle classes imposed immigration restrictions to prevent what they called "inferiors" from soiling the population with inferior genes (Barnes and Oliver, 1998). Laws which discriminated against people with impairments became a priority in the history of the social construction of disability (Barnes and Oliver, 1998).
In support of Oliver and Barnes's argument, Glad (2006) states that it is discrimination for someone to decide which characteristics are worthy enough to be part of society and which are not. Glad argues that it is society's duty to discriminate against the disease but not against its victims. In Buck v Bell (1927), the United States Supreme Court upheld laws permitting the sterilization of people with intellectual disabilities and...
|
Broad Bean Health Benefits
Broad bean or fava bean scientifically known as Vicia faba is a member of the vetch family and grows mostly in temperate regions. Broad beans are an excellent vegetable source of protein and fibre.
Fresh broad beans are an excellent source of folate. 100 g of beans provide 423 µg of folate, or about 106% of the recommended daily value.
Adequate folate in the diet around conception, and during pregnancy may help prevent neural-tube defects in the newborn baby.
In addition to being an excellent source of nutrients that support cardiovascular health, broad beans are high in dietary fiber, providing 9 g per 1/4 cup.
Like other legumes, broad beans are a source of both types of dietary fiber, soluble and insoluble, and are particularly rich in soluble fiber.
Consuming soluble, fiber-rich foods may help improve your blood sugar and cholesterol levels.
Broad beans can have major benefits for an individual's health and wellbeing and have been linked with a long list of disease- and obesity-fighting vitamins, nutrients and compounds. They are very low in calories, which can help maintain a healthy diet or assist with weight-loss management and so reduce the risk of heart disease and high cholesterol; they support the control of blood sugar levels; they are a rich source of L-dopa (levodopa), which can help in the treatment of Parkinson's disease; and they are even said to be a rather good aphrodisiac.
|
From Wikipedia, the free encyclopedia
Synthesizers were first used in pop music in the 1960s. Synths were used heavily in disco, especially in the late 1970s. In the 1980s, the invention of the relatively inexpensive, mass-market Yamaha DX7 synth made digital synthesizers widely available. 1980s pop and dance music often made heavy use of synthesizers. In the 2010s, synthesizers are used in many genres of pop, rock and dance music. Contemporary classical music composers from the 20th and 21st century write compositions for synthesizer.
Synthesizers before 19th century
Rudolph Koenig's sound synthesizer of 1865, which consisted of tuning forks, electromagnets, and Helmholtz resonators.
The beginnings of the synthesizer are difficult to trace, as it is difficult to draw a distinction between synthesizers and some early electric or electronic musical instruments.[1][2]
Early electric instruments
One of the earliest electric musical instruments, the musical telegraph, was invented in 1876 by American electrical engineer Elisha Gray. He accidentally discovered that he could generate sound from a self-vibrating electromechanical circuit, and went on to invent a basic single-note oscillator. This musical telegraph used steel reeds whose oscillations, created by electromagnets, were transmitted over a telegraph line. Gray also built a simple loudspeaker device into later models, consisting of a vibrating diaphragm in a magnetic field, to make the oscillator audible.[3][4]
Early additive synthesizer: tonewheel organs
In 1897, Thaddeus Cahill invented the Telharmonium (or Dynamophone), which used dynamos (early electric generators)[5] and was capable of additive synthesis, like the Hammond organ, which was invented in 1934. Cahill built three versions of the instrument, the first of which weighed over two tons. Cahill's business was unsuccessful for various reasons (the size of the system, the rapid evolution of electronics, crosstalk issues on the telephone line, etc.), and similar but more compact instruments were subsequently developed, such as electronic and tonewheel organs.
Emergence of electronics and early electronic instruments
Left: Theremin (RCA AR-1264; 1930). Middle: Ondes Martenot (7th-generation model in 1978). Right: Trautonium (Telefunken Volkstrautonium Ela T42; 1933).
Graphical sound
Subtractive synthesis and polyphonic synthesizer
Hammond Novachord (1939) and Welte Lichtton orgel (1935)
Monophonic electronic keyboards
Other innovations
Hugh Le Caine's Electronic Sackbut (1948) and Yamaha Magna Organ (1935)
In Japan, as early as 1935, Yamaha released the Magna Organ,[19] a multi-timbral keyboard instrument based on electrically blown free reeds with pickups.[20] It may have been similar to the electrostatic reed organs developed by Frederick Albert Hoschke in 1934 and then manufactured by Everett and Wurlitzer until 1961.
Electronic music studios as sound synthesizers
Synthesizer (left) and an audio console at the Studio di fonologia musicale di Radio Milano (of RAI) (1955–1983; renewed in 1968)
Origin of the term "sound synthesizer"
From modular synthesizer to popular music
The Moog modular synthesizer of 1960s–1970s
Robert Moog built his first prototype between 1963 and 1964, and was then commissioned by the Alwin Nikolais Dance Theater of NY;[34][35] while Donald Buchla was commissioned by Morton Subotnick.[36][37] In the late 1960s to 1970s, the development of miniaturized solid-state components allowed synthesizers to become self-contained, portable instruments, as proposed by Harald Bode in 1961. By the early 1980s, companies were selling compact, modestly priced synthesizers to the public. This, along with the development of Musical Instrument Digital Interface (MIDI), made it easier to integrate and synchronize synthesizers and other electronic instruments for use in musical composition. In the 1990s, synthesizer emulations began to appear in computer software, known as software synthesizers. From 1996 onward, Steinberg's Virtual Studio Technology (VST) plug-ins – and a host of other kinds of competing plug-in software, all designed to run on personal computers – began emulating classic hardware synthesizers, becoming increasingly successful at doing so during the following decades.
In 1974, Roland Corporation released the EP-30, the first touch-sensitive electronic keyboard.[46]
Polyphonic keyboards and the digital revolution
In 1973, Yamaha developed the Yamaha GX-1, an early polyphonic synthesizer.[47] Other polyphonic synthesizers followed, mainly manufactured in Japan and the United States from the mid-1970s to the early-1980s, and included Roland Corporation's RS-101 and RS-202 (1975 and 1976) string synthesizers,[48][49] the Yamaha CS-80 (1976), Oberheim's Polyphonic and OBX (1975 and 1979), Sequential Circuits' Prophet-5 (1978), and Roland's Jupiter 4 and Jupiter 8 (1978 and 1981). The success of the Prophet 5, a polyphonic and microprocessor-controlled keyboard synthesizer, aided the shift of synthesizers towards their familiar modern shape, away from large modular units and towards smaller keyboard instruments.[50] This form factor helped accelerate the integration of synthesizers into popular music, a shift that had been lent powerful momentum by the Minimoog, and also later the ARP Odyssey.[51] Earlier polyphonic electronic instruments of the 1970s, rooted in string synthesizers before advancing to multi-synthesizers incorporating monosynths and more, gradually fell out of favour in the wake of these newer, note-assigned polyphonic keyboard synthesizers.[52]
In 1973,[53] Yamaha licensed frequency modulation synthesis (FM synthesis), the first digital synthesis algorithm, from John Chowning, who had experimented with it since 1971.[54] Yamaha's engineers began adapting Chowning's algorithm for use in a commercial digital synthesizer, adding improvements such as the "key scaling" method to avoid the introduction of distortion that normally occurred in analog systems during frequency modulation.[55] In the 1970s, Yamaha were granted a number of patents, under the company's former name "Nippon Gakki Seizo Kabushiki Kaisha", evolving Chowning's early work on FM synthesis technology.[56] Yamaha built the first prototype digital synthesizer in 1974.[53] Yamaha eventually commercialized FM synthesis technology with the Yamaha GS-1, the first FM digital synthesizer, released in 1980.[57] The first commercial digital synthesizer, the Casio VL-1,[58] had been released a year earlier, in 1979.[59]
The Fairlight CMI of the late 1970s-early 1980s.
The Yamaha DX7 of 1983.
In 1983, however, Yamaha's revolutionary DX7 digital synthesizer[53][64] swept through popular music, leading to the adoption and development of digital synthesizers in many varying forms during the 1980s, and the rapid decline of analog synthesizer technology. In 1987, Roland's D50 synthesizer was released, which combined sample-based synthesis[note 3] with onboard digital effects,[65] while Korg's even more popular M1 (1988) heralded the era of the workstation synthesizer, based on ROM sample sounds for composing and sequencing whole songs, rather than solely traditional sound synthesis.[66]
The Clavia Nord Lead series released in 1995.
Throughout the 1990s, the popularity of electronic dance music employing analog sounds, the appearance of digital analog modelling synthesizers to recreate these sounds, and the development of the Eurorack modular synthesiser system, initially introduced with the Doepfer A-100 and since adopted by other manufacturers, all contributed to the resurgence of interest in analog technology. The turn of the century also saw improvements in technology that led to the popularity of digital software synthesizers.[67] In the 2010s, new analog synthesizers, both in keyboard instrument and modular form, are released alongside current digital hardware instruments.[68] In 2016, Korg announced the release of the Korg Minilogue, the first polyphonic analogue synth to be mass-produced in decades.
Impact on popular music
In the 1970s, electronic music composers such as Jean Michel Jarre,[69] Vangelis[70] and Isao Tomita,[42][41][71] released successful synthesizer-led instrumental albums. Over time, this helped influence the emergence of synthpop, a subgenre of new wave, from the late 1970s to the early 1980s. The work of German krautrock bands such as Kraftwerk[72] and Tangerine Dream, British acts such as Gary Numan and David Bowie, African-American acts such as George Clinton and Zapp, and Japanese electronic acts such as Yellow Magic Orchestra and Kitaro, were influential in the development of the genre.[73] Gary Numan's 1979 hits "Are 'Friends' Electric?" and "Cars" made heavy use of synthesizers.[74][75] OMD's "Enola Gay" (1980) used distinctive electronic percussion and a synthesized melody. Soft Cell used a synthesized melody on their 1981 hit "Tainted Love".[73] Nick Rhodes, keyboardist of Duran Duran, used various synthesizers including the Roland Jupiter-4 and Jupiter-8.[76]
Chart hits include Depeche Mode's "Just Can't Get Enough" (1981),[73] The Human League's "Don't You Want Me"[77] and Giorgio Moroder's "Take My Breath Away" (1986) for Berlin. Other notable synthpop groups included New Order,[78] Visage, Japan, Men Without Hats, Ultravox,[73] Spandau Ballet, Culture Club, Eurythmics, Yazoo, Thompson Twins, A Flock of Seagulls, Heaven 17, Erasure, Soft Cell, Pet Shop Boys, Bronski Beat, Kajagoogoo, ABC, Naked Eyes, Devo, and the early work of Tears for Fears and Talk Talk. Giorgio Moroder, Brian Eno, Phil Collins, Howard Jones, Stevie Wonder, Peter Gabriel, Thomas Dolby, Kate Bush, Enya, Mike Oldfield, Dónal Lunny, Frank Zappa and Todd Rundgren all made use of synthesizers.
The synthesizer became one of the most important instruments in the music industry.[73]
Types of synthesizers
Sound synthesis
Additive synthesis builds sounds by adding together waveforms into a composite sound. Instrument sounds are simulated by matching their natural harmonic overtone structure. Early analog examples of additive synthesizers are the Telharmonium and the Hammond organ; a later digital example is the Synclavier.
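As a rough illustration of the idea only (not any particular instrument's design; the harmonic amplitudes below are invented for the example), a few lines of Python can build a tone by summing sine-wave partials at integer multiples of a fundamental:

import numpy as np

def additive_tone(f0=220.0, harmonics=(1.0, 0.5, 0.33, 0.25), sr=44100, dur=1.0):
    """Sum sine-wave partials at integer multiples of f0 (additive synthesis)."""
    t = np.arange(int(sr * dur)) / sr
    tone = sum(a * np.sin(2 * np.pi * f0 * (k + 1) * t)
               for k, a in enumerate(harmonics))
    return tone / np.max(np.abs(tone))   # normalise to the -1..1 range

samples = additive_tone()

Changing the relative amplitudes of the partials is what shapes the simulated overtone structure.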
Subtractive synthesis is based on filtering harmonically rich waveforms. It is implemented in early synthesizers such as the Moog synthesizer. Subtractive synthesizers approximate instrumental sounds by a signal generator (producing sawtooth waves, square waves, etc.) followed by a filter. The combination of simple modulation routings (such as pulse width modulation and oscillator sync), along with the lowpass filter, is responsible for the "classic synthesizer" sound commonly associated with "analog synthesis."
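A minimal sketch of that signal chain, assuming nothing more than a naive sawtooth source and a one-pole low-pass filter (real instruments use resonant multi-pole filters and anti-aliased oscillators):

import numpy as np

def saw(f0, sr, dur):
    """Naive sawtooth oscillator: a harmonically rich source waveform."""
    t = np.arange(int(sr * dur)) / sr
    return 2.0 * (t * f0 - np.floor(0.5 + t * f0))

def one_pole_lowpass(x, cutoff, sr):
    """Very simple one-pole low-pass filter used to 'subtract' high harmonics."""
    a = np.exp(-2 * np.pi * cutoff / sr)
    y = np.zeros_like(x)
    for n in range(1, len(x)):
        y[n] = (1 - a) * x[n] + a * y[n - 1]
    return y

sr = 44100
filtered = one_pole_lowpass(saw(110.0, sr, 1.0), cutoff=800.0, sr=sr)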
FM synthesis was hugely successful in the earliest digital synthesizers.
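A single carrier/modulator operator pair in the style of Chowning's published formula can be sketched in a few lines; the frequencies and modulation index below are arbitrary example values, not settings from any particular synthesizer:

import numpy as np

def fm_tone(fc=440.0, fm=220.0, index=3.0, sr=44100, dur=1.0):
    """One FM operator pair: a modulator sine varies the phase of the carrier."""
    t = np.arange(int(sr * dur)) / sr
    return np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))

samples = fm_tone()

Raising the modulation index spreads energy into more sidebands, which is why a single FM pair can produce anything from a pure sine to a bright, bell-like spectrum.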
Phase distortion synthesis is a method implemented on Casio CZ synthesizers. It replaces the traditional analog waveform with a choice of several digital waveforms which are more complex than the standard square, sine, and sawtooth waves. This waveform is routed to a digital filter and digital amplifier, each modulated by an eight-stage envelope. The sound can then be further modified with ring modulation or noise modulation.
Physical modelling synthesis is the synthesis of sound by using a set of equations and algorithms to simulate each sonic characteristic of an instrument, starting with the harmonics that make up the tone itself, then adding the sound of the resonator, the instrument body, etc., until the sound realistically approximates the desired instrument. When an initial set of parameters is run through the physical simulation, the simulated sound is generated. Although physical modeling was not a new concept in acoustics and synthesis, it was not until the development of the Karplus-Strong algorithm and the increase in DSP power in the late 1980s that commercial implementations became feasible. The quality and speed of physical modeling on computers improves with higher processing power.
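Since the paragraph mentions the Karplus-Strong algorithm, here is a minimal textbook-style sketch of that plucked-string model (an illustration, not a commercial implementation):

import numpy as np

def karplus_strong(f0=110.0, sr=44100, dur=1.0):
    """Plucked-string physical model: a noise burst fed through a feedback delay line."""
    period = int(sr / f0)                      # delay length sets the pitch
    buf = np.random.uniform(-1, 1, period)     # excitation (the 'pluck')
    out = np.empty(int(sr * dur))
    for n in range(len(out)):
        out[n] = buf[n % period]
        # averaging adjacent samples models the string's damping (energy loss)
        buf[n % period] = 0.5 * (buf[n % period] + buf[(n + 1) % period])
    return out

samples = karplus_strong()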
Analysis/resynthesis is the technique typically used in the vocoder.
Sample-based synthesis involves digitally recording a short snippet of sound from a real instrument or other source and then playing it back at different speeds to produce different pitches. A sample can be played as a one shot, used often for percussion or short duration sounds, or it can be looped, which allows the tone to sustain or repeat as long as the note is held. Samplers usually include a filter, envelope generators, and other controls for further manipulation of the sound. Virtual samplers that store the samples on a hard drive make it possible for the sounds of an entire orchestra, including multiple articulations of each instrument, to be accessed from a sample library. See also Wavetable synthesis, Vector synthesis.
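A rough sketch of the play-back-at-a-different-speed idea, using simple linear-interpolation resampling (real samplers use better interpolation plus loop points); recorded_snippet in the commented usage line is a hypothetical array of audio samples:

import numpy as np

def repitch(sample, semitones):
    """Play a recorded snippet back at a different speed to shift its pitch."""
    rate = 2 ** (semitones / 12.0)               # playback-speed ratio for the new pitch
    idx = np.arange(0, len(sample) - 1, rate)    # fractional read positions
    i = idx.astype(int)
    frac = idx - i
    return (1 - frac) * sample[i] + frac * sample[i + 1]   # linear interpolation

# e.g. a one-shot snippet played a fifth (7 semitones) higher:
# higher = repitch(recorded_snippet, 7)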
Imitative synthesis
Basic components of an analogue subtractive synthesizer
analogue synth components
Various filter modes.
Attack Decay Sustain Release (ADSR) envelope
Schematic of an ADSR envelope (attack, decay, sustain and release plotted against key on/off), an inverted ADSR envelope, and the 8-step envelope on the Casio CZ series.
A common feature on many synthesizers is an AD envelope (attack and decay only). This can be used to control e.g. the pitch of one oscillator, which in turn may be synchronized with another oscillator by oscillator sync.
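A minimal sketch of a piecewise-linear ADSR generator, assuming times in seconds and a unipolar 0-1 output (many hardware envelopes use exponential rather than linear segments); the commented usage lines reference a hypothetical tone array:

import numpy as np

def adsr(gate_samples, attack, decay, sustain, release, sr=44100):
    """Piecewise-linear ADSR contour for a note held for gate_samples samples."""
    a = np.linspace(0.0, 1.0, int(attack * sr), endpoint=False)     # rise to full level
    d = np.linspace(1.0, sustain, int(decay * sr), endpoint=False)  # fall to sustain level
    s_len = max(gate_samples - len(a) - len(d), 0)
    s = np.full(s_len, sustain)                                     # hold while the key is down
    r = np.linspace(sustain, 0.0, int(release * sr))                # fade out after key-off
    return np.concatenate([a, d, s, r])

# envelope = adsr(gate_samples=22050, attack=0.01, decay=0.1, sustain=0.7, release=0.3)
# shaped = tone[:len(envelope)] * envelope    # amplitude control of an oscillator

The same contour can be routed to a filter cutoff or, as described above, to oscillator pitch when only an AD shape is needed.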
LFO section of Access Virus C
Arpeggiators seem to have grown from the accompaniment system used in electronic organs in the mid-1960s to the mid-1970s.[84] They were also commonly fitted to keyboard instruments through the late 1970s and early 1980s. Notable examples are the RMI Harmonic Synthesizer (1974),[85] Roland Jupiter-8, Oberheim OB-8, Roland SH-101, Sequential Circuits Six-Trak and Korg Polysix. A famous example can be heard on Duran Duran's song "Rio", in which the arpeggiator on a Roland Jupiter-4 plays a C minor chord in random mode. They fell out of favor by the latter part of the 1980s and early 1990s and were absent from the most popular synthesizers of the period, but renewed interest in analog synthesizers during the 1990s, together with the use of rapid-fire arpeggios in several popular dance hits, brought them back into fashion.
Control interfaces
Non-contact interface (AirFX)
Tangible interface (Reactable)
Pitch & mod. wheels and touchpad
Drum pad
Guitar-style interface (SynthAxe)
Fingerboard controller
Left: Ondes Martenot (6G in 1960)
Right: Mixture Trautonium (replica of 1952)
on Korg monotron
Ribbon controller
on Moog 3P (1972)
A ribbon controller or other violin-like user interface may be used to control synthesizer parameters. The idea dates to Léon Theremin's 1922 first concept[87] and his 1932 Fingerboard Theremin and Keyboard Theremin,[88][89] Maurice Martenot's 1928 Ondes Martenot (sliding a metal ring),[90] Friedrich Trautwein's 1929 Trautonium (finger pressure), and was also later utilized by Robert Moog.[91][92][93] The ribbon controller has no moving parts. Instead, a finger pressed down and moved along it creates an electrical contact at some point along a pair of thin, flexible longitudinal strips whose electric potential varies from one end to the other. Older fingerboards used a long wire pressed to a resistive plate. A ribbon controller is similar to a touchpad, but a ribbon controller only registers linear motion. Although it may be used to operate any parameter that is affected by control voltages, a ribbon controller is most commonly associated with pitch bending.
Fingerboard-controlled instruments include the Trautonium (1929), Hellertion (1929) and Heliophon (1936),[94][95][96] Electro-Theremin (Tannerin, late 1950s), Persephone (2004), and the Swarmatron (2004). A ribbon controller is used as an additional controller in the Yamaha CS-80 and CS-60, the Korg Prophecy and Korg Trinity series, the Kurzweil synthesizers, Moog synthesizers, and others.
Wind controllers
Wind controller
Accordion synthesizer
Wind controllers (and wind synthesizers) are convenient for woodwind and brass players, being designed to imitate those instruments. These are usually either analog or MIDI controllers, and sometimes include their own built-in sound modules (synthesizers). In addition to following the key arrangements and fingering of those instruments, the controllers have breath-operated pressure transducers, and may have gate extractors, velocity sensors, and bite sensors. Saxophone-style controllers have included the Lyricon, and products by Yamaha, Akai, and Casio. The mouthpieces range from alto clarinet to alto saxophone sizes. The Eigenharp, a controller similar in style to a bassoon, was released by Eigenlabs in 2009. Melodica and recorder-style controllers have included the Martinetta (1975)[97] and Variophon (1980),[98] and Joseph Zawinul's custom Korg Pepe.[99] A harmonica-style interface was the Millionizer 2000 (c. 1983).[100]
Trumpet-style controllers have included products by Steiner/Crumar/Akai, Yamaha, and Morrison. Breath controllers can also be used to control conventional synthesizers, e.g. the Crumar Steiner Masters Touch,[101] Yamaha Breath Controller and compatible products.[102] Several controllers also provide breath-like articulation capabilities.
Accordion controllers use pressure transducers on bellows for articulation.
Other controllers include theremin, lightbeam controllers, touch buttons (touche d’intensité) on the ondes Martenot, and various types of foot pedals. Envelope following systems, the most sophisticated being the vocoder, are controlled by the power or amplitude of input audio signal. A musician uses the talk box to manipulate sound using the vocal tract, though it is rarely categorized as a synthesizer.
MIDI control
Synthesizers became easier to integrate and synchronize with other electronic instruments and controllers with the introduction of Musical Instrument Digital Interface (MIDI) in 1983.[103] First proposed in 1981 by engineer Dave Smith of Sequential Circuits, the MIDI standard was developed by a consortium now known as the MIDI Manufacturers Association.[104] MIDI is an opto-isolated serial interface and communication protocol.[104] It provides for the transmission from one device or instrument to another of real-time performance data. This data includes note events, commands for the selection of instrument presets (i.e. sounds, or programs or patches, previously stored in the instrument's memory), the control of performance-related parameters such as volume, effects levels and the like, as well as synchronization, transport control and other types of data. MIDI interfaces are now almost ubiquitous on music equipment and are commonly available on personal computers (PCs).[104]
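To make the kind of performance data concrete, the sketch below assembles the three bytes of a standard MIDI 1.0 note-on message (status byte 0x90 plus the channel, then a 7-bit note number and velocity); this reflects the published message format rather than any particular synthesizer's behaviour:

def note_on(channel, note, velocity):
    """Build a 3-byte MIDI 1.0 note-on message: status (0x90 | channel), note, velocity."""
    assert 0 <= channel < 16 and 0 <= note < 128 and 0 <= velocity < 128
    return bytes([0x90 | channel, note, velocity])

msg = note_on(channel=0, note=60, velocity=100)   # middle C, played moderately hard
print(msg.hex())                                   # prints '903c64'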
Recent trends in synthesizer design, particularly the resurgence of modular systems in Eurorack, have allowed a hybrid of MIDI control and control voltage i/o to be found together in many models (an example being the Moog Model D reissue, which was enhanced from its original design to offer both MIDI i/o and CV i/o). In these MIDI/CV hybrids, it is often possible to send and receive control voltages to control parameters of equipment at the same time that MIDI messages are being sent and received.
Additional examples of MIDI/CV hybrids include models like the Arturia Minibrute, which is able to receive MIDI messages from an external controller and automatically convert the MIDI signal into gate and pitch notes, which it can then send out as control voltage.
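A sketch of the note-to-voltage arithmetic such a converter performs, assuming the common 1 V-per-octave convention referenced to MIDI note 60 (the reference note and scaling differ between manufacturers, so these are example choices, not the Minibrute's actual firmware logic):

def midi_to_cv(note, reference_note=60):
    """Convert a MIDI note number to a 1 V/octave pitch control voltage."""
    return (note - reference_note) / 12.0        # 12 semitones per volt

def note_on_to_gate_and_pitch(status, note, velocity):
    """Derive a gate (on/off) signal and pitch CV from a note-on style message."""
    gate = (status & 0xF0) == 0x90 and velocity > 0
    return gate, midi_to_cv(note)

print(midi_to_cv(69))   # A above middle C -> 0.75 V under these assumptions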
Typical roles
Synth lead
Synth pad
A synth pad is a sustained chord or tone generated by a synthesizer, often employed for background harmony and atmosphere in much the same fashion that a string section is often used in orchestral music and film scores. Typically, a synth pad is performed using whole notes, which are often tied over bar lines. A synth pad sometimes holds the same note while a lead voice sings or plays an entire musical phrase or section. Often, the sounds used for synth pads have a vaguely organ, string, or vocal timbre. During the late 1970s and 1980s, dedicated string synthesizers were made that specialized in creating string sounds using the limited technology of the time. Much popular music in the 1980s employed synth pads, this being the time of polyphonic synthesizers, as did the then-new styles of smooth jazz and new-age music. One of many well-known songs from the era to incorporate a synth pad is "West End Girls" by the Pet Shop Boys, who were noted users of the technique.
Synth bass
Following the availability of programmable music sequencers such as the Synclavier and Roland MC-8 Microcomposer in the late 1970s, bass synths began incorporating sequencers in the early 1980s. The first bass synthesizer with a sequencer was the Firstman SQ-01.[107][108] It was originally released in 1980 by Hillwood/Firstman, a Japanese synthesizer company founded in 1972 by Kazuo Morioka (who later worked for Akai in the early 1980s), and was then released by Multivox for North America in 1981.[109][110][49]
A particularly influential bass synthesizer was the Roland TB-303.[111] Released in late 1981, it featured a built-in sequencer and later became strongly associated with acid house music.[112] Bass synthesizers began being used to create highly syncopated rhythms and complex, rapid basslines. Bass synth patches incorporate a range of sounds and tones, including wavetable-style, analog, and FM-style bass sounds, delay effects, distortion effects, envelope filters. In popular music, these techniques gained wide popularity with the emergence of acid house music, after Phuture's use of the TB-303 for the single "Acid Tracks" in 1987,[111] though such techniques were predated by Charanjit Singh's use of the TB-303 in 1982.[112]
Since their invention, there has been concern over synthesizers putting session musicians out of a job, since they can recreate the sounds of many instruments. Some musicians (especially keyboardists) viewed the synth as they would any musical instrument. Other musicians viewed the synth as a threat to traditional session musicians, and the British Musicians' Union attempted to ban it in 1982. The ban never became official policy.[113] Broadway plays are also now using synthesizers to reduce the number of live musicians required.[114]
Synthesizer music, especially synth-pop, has been described as "anaemic"[115] and "soulless".[116]
References
13. ^ Warbo Formant Organ (photograph), 1937
41. ^ a b Tomita at AllMusic. Retrieved 2011-06-04.
45. ^ Chicory Tip (official website)
46. ^ FutureMusic, issues 131-134, 2003, page 55
47. ^ Yamaha GX-1, Vintage Synth Explorer
48. ^ Jenkins, Mark (2009). Analog Synthesizers: Understanding, Performing, Buying--From the Legacy of Moog to Software Synthesis. CRC Press. p. 89. ISBN 978-1-136-12278-1.
49. ^ a b A TALE OF TWO STRING SYNTHS, Sound on Sound, July 2002
53. ^ a b c "[Chapter 2] FM Tone Generators and the Dawn of Home Music Production". Yamaha Synth 40th Anniversary - History. Yamaha Corporation. 2014.
54. ^ Holmes, Thom (2008). "Early Computer Music". Electronic and experimental music: technology, music, and culture (3rd ed.). Taylor & Francis. p. 257. ISBN 0-415-95781-8. Retrieved 2011-06-04.
55. ^ Holmes, Thom (2008). "Early Computer Music". Electronic and experimental music: technology, music, and culture (3rd ed.). Taylor & Francis. pp. 257–8. ISBN 0-415-95781-8. Retrieved 2011-06-04.
56. ^ U.S. Patent 4,018,121
57. ^ Curtis Roads (1996). The computer music tutorial. MIT Press. p. 226. ISBN 0-262-68082-3. Retrieved 2011-06-05.
71. ^ "Snowflakes Are Dancing". Billboard. Retrieved 2011-05-28.
73. ^ a b c d e Borthwick 2004, p. 120
77. ^ Borthwick 2004, p. 130
79. ^ John M. Chowning (September 1973), The Synthesis of Complex Audio Spectra by Means of Frequency Modulation (PDF)
81. ^ Envelope (Sound) at Encyclopædia Britannica
89. ^ Glinsky, Albert (2000), Theremin: ether music and espionage, University of Illinois Press, p. 145, ISBN 978-0-252-02582-2, In addition to its 61 keys (five octaves), it had a "fingerboard channel" offering an alternate interface for string players.
97. ^ Christoph Reuter, Martinetta and Variophon,
98. ^ Christoph Reuter, Variophon and Martinetta Enthusiasts Page,
99. ^ Joseph Pepe Zawinul, (also another photograph is shown on gallery page)
101. ^ Crumar Steiner Masters Touch CV Breath Controller, January 21, 2008
102. ^ Yamaha DX100 with BC-1 Breath Controller, December 16, 2007
106. ^ Royalty Free Music : Funk – incompetech (mp3). Kevin MacLeod.
107. ^ "Firstman SQ-01 Sequence Synthesizer from Multivox" (advertisement). Contemporary Keyboard. Vol. 7 no. June 1981 - November 1981. p. 23.
108. ^ "Multivox Firstman SQ-01 Sequencer". Keyboard Report. Contemporary Keyboard. Vol. 7 no. October 1981. pp. 82, 88. ("Keyboard Report, Oct. '81", according to the "Vol.9, 1983". )
109. ^ "Firstman International". SYNRISE (in German). Archived from the original on 2003-04-20. FIRSTMAN existiert seit 1972 und hat seinen Ursprung in Japan. Dort ist dieFirma unter dem Markennamen HILLWOOD bekannt. HILLWOOD baute dann auch 1973 den quasi ersten Synthesizer von FIRSTMAN. Die Firma MULTIVOX liess ihre Instrumente von 1976 bis 1980 bei HILLWOOD bauen.","SQ-10 / mon syn kmi ? (1980) / Monophoner Synthesizer mit wahrscheinlich eingebautem Sequenzer. Die Tastatur umfasst 37 Tasten. Die Klangerzeugung beruht auf zwei VCOs.
110. ^ Mark Jenkins (2009), Analog Synthesizers, pages 107-108, CRC Press
111. ^ a b Vine, Richard (15 June 2011). "Tadao Kikumoto invents the Roland TB-303". The Guardian. Retrieved 9 July 2011.
112. ^ a b Aitken, Stuart (10 May 2011). "Charanjit Singh on how he invented acid house ... by mistake". The Guardian.
113. ^ "1981-1990 – The Musicians' Union: A History (1893-2013)".
114. ^ Green, Jesse (25 March 2007). "Notion - Digital Orchestras". The New York Times.
Further reading
• Kuit, Roland (2014). SoundLab I: The Electronic Studio. Publisher's number: 13664. The Netherlands, The Hague: Donemus.
• Kuit, Roland (2014). SoundLab II: Architectures for Philosophers. Publisher's number: 13665. The Netherlands, The Hague: Donemus.
• Kuit, Roland (2014). Laboratory of Patching: Illustrated Compendium of Modular Synthesis. Publisher's number: 13662. The Netherlands, The Hague: Donemus.
• Kuit, Roland (2014). To be On, to be OFF, that’s the SWITCH. Publisher's number: 13666. The Netherlands, The Hague: Donemus.
• Kuit, Roland (2014). Modular strategies in shaping reflections and space. Publisher's number: 13663. The Netherlands, The Hague: Donemus.
|
Blog # 3: Rules of the Sociological Method & Suicide (Durkheim)
Durkheim's idea of crime is a very interesting subject. As he mentioned, crime can never be completely gone because it is needed to reaffirm the rules, or the social order of society, about what is right and what is wrong. I had not realized that life is like a game, where you can make the rules and the consequences. It is interesting to understand that even if there is little crime, there is some type of balancing order, because there cannot be a place where no one ever needs to rob food for their family.
Durkheim mentioned that we inherit the acts we perform in society, such as education. In my opinion, having objectives to conform to every day, in ways outside your control, is shaped by the life that has been given to you. In addition, our actions or decisions are sometimes based on outside influences, such as our neighbors or the group of people we hang out with. Also, our inner thoughts can come to feel strange or foreign to us, because the influence of other people leads us to conform to what already is.
Another social issue that Durkheim looked into is suicide. He mentioned that suicide does not cluster in areas that have a lot of alcoholism but in areas that lack social and moral regulation (interaction). Social interaction is important in communities because people feel that they are part of something important and are involved in something that can give them pleasure and other kinds of satisfaction. An example of someone's social need is a team-building workshop at work, where people need to trust the workers next to them to get the job done.
The two things that Durkheim noticed were missing in people who commit suicide are, as I mentioned before, social integration and moral regulation. He calls the resulting conditions egoism and anomie. People who live in cities often have no one close to them and live alone, which makes them more likely to commit suicide (egoism). People who have no morals to guide them, lost souls I would say, who have no goal to move toward and do not know what direction to go, are also prone to commit suicide (anomie). It is pretty interesting that suicide in Durkheim's period is similar to how we think about someone killing themselves today. In the New York Times there was an article that discussed how middle-aged Americans committed suicide more often over the past decade. "In 2010 there were 33,687 deaths from motor vehicle crashes and 38,364 suicides" (Parker-Pope, NYTIMES, 2013); the difference between these two causes of death is about 5,000. It is hard to believe that so many Americans between the ages of 35 and 64, men and women, are willing to kill themselves. Sadly, suicide carries a stigma that a lot of people do not want to talk about. However, having a discussion and understanding the issues each individual had to go through would shed some light on the reasons they wanted to end their lives and help us, the people who will make decisions in society in the future, avoid the same mistakes and help people out.
Also, as a personal note, thinking about suicide is something I believe many people have done, including myself. However, it is easy for one person to give in under all the stress, and the unknown road ahead is pretty hard for one person to carry on their shoulders without the support of other people.
Edles, Laura and Appelrouth, Scott. 2010. Sociological Theory in the Classical Era: Text and Readings, 2nd edition. Pine Forge Press / Sage Publications.
Parker-Pope, Tara. New York Times (online). (2013, May 2). "Suicide Rates Rise Sharply in U.S." Accessed on February 24, 2014.
|
Keep Feeling Fascination
When children are fascinated by something, they want to do it all the time. They can also show the most incredible concentration. I’ve seen a child scoop the seeds out of a pumpkin for fifteen minutes and more, so she could get the inside of it clean. I’ve seen a child spend hours putting rocks in a digger, tipping them out, mixing soil and water, to create a universe of his own design. I’ve seen a child plant seeds, then return time and again, so she can watch them grow into a flower or a vegetable. I’ve seen a child so interested in dinosaurs that you could show him any dinosaur picture, and he would tell you everything you ever wanted to know (and more) about it. Magically, when children are small, learning seems to follow fascination. Fascination helps them make exponential leaps.
As children grow, there are many things we need to teach them, as opposed to letting them flit around and learn in their fascinated way all the time. Reading and writing need to be taught, preferably alongside parents. (Those Dinosaur books didn’t read themselves.) We get to introduce children to a wider and deeper understanding of their world, as they become more able to understand it. We show them lots of stuff that might fascinate them, but somewhere along the way, we seem to stop focusing on what they are truly fascinated by. And this is the question that really interests me, especially when it comes to my own children. Not how we should best measure teaching standards. Not why learning should be hard, and my children must ‘attain mastery’. Not even why a growth mindset is better than the alternative. But how do I keep them feeling fascination?
9 Responses to Keep Feeling Fascination
1. nancy says:
For me, the answer, as a parent, is to allow them their idiosyncrasies, and the time to follow their fascinations. For my middle son, if there's anything he doesn't know about trains it's not worth knowing. For many parents this fascination may be somewhat embarrassing, especially as he heads towards 13, but hey, for me, that's what being a loving parent is all about.
As a teacher, it seems to me that we need to have the space to get to know our children a bit better. When we have the time to unbend a little, to chat a little, we can find out what fascinates them, and use that fascination, somewhat sneakily, in our lessons. It might be the throwaway comment, the shared moment of humour, it might be something more formal entirely. With an overburdened timetable and bloated curriculum it seems a little unlikely, but I like to dream.
2. manyanaed says:
Be a scientist. Fascinations come with the qualification.
3. You keep them fascinated by giving them the tools they need to make sense of the world. This includes a decent grounding, sometimes painfully acquired through lots of practise (challenge in life moulds our character), in maths and English. Thereafter, layers of history, music, science, theology, philosophy etc can be added to the enlightened mind of the young person.
I know many adults who were not taught the basics properly and have suffered ever since. They are extremely limited both in terms of life choices and in terms of what ‘fascinates’ them; for example, how could they read the works of Shakespeare if they have not been taught English to a sufficient standard (including a decent grounding in etymology)?
Give a child a toy digger and they are fascinated by a few rocks in a garden for about 5 minutes. Give a child a decent grounding in education and they have a whole universe of subject matter to be fascinated by for the rest of their lives.
4. Great post and comments. As we move into teenage years school requirements step up as does our awareness of where we stand in the quite contrived social environment of school. This is where fascination is lost for many as they try and do the work set, while also fitting into the given peer culture. Teens are safe if they happen to have a fascination for something that is shared by an adult in their life who will talk with them and show them how to develop. This may be a teacher, but with 30 kids in a class it’s hard to break through to connect on a personal level. Without this luck, it could be anything that the teenager ends up following. Good or bad. As I hit my 20’s I had no idea what fascinated me, because I’d tried so hard to fit into the small world that existed around me. That’s when a lady suggested that I be a ‘naughty child’ for a week i.e. do exactly what I feel like without self-judging. That’s when I got out a pen and paper and began writing about schooling. That’s my fascination, it’s always been so, but I had no idea that it was OK to follow a fascination or that by doing so my own creativity and determination would come alive like it never did as a hard working student under caring and dedicated teachers.
5. Interesting blog, but are you overstating the importance of being fascinated? Are you constantly fascinated by what is around you? Would you want to be constantly fascinated? Can fascination often get in the way of other feelings, and can it impede learning and development if it is over-present?
|
Cheers!—not to your health, but to your memory.
Drinking alcohol after learning information appears to aid the brain’s ability to store and remember that information later, according to a study of at-home boozing in Scientific Reports. The memory-boosting effect—which has been seen in earlier lab-based studies—linked up with how much a person drank: the more alcohol, the better the memory the next day.
The study authors, led by psychopharmacologist Celia Morgan of University of Exeter, aren’t sure why alcohol improves memory in this way, though. They went into the experiment hypothesizing that alcohol blocks the brain’s ability to lay down new memories, thus freeing up noggin power to carefully encode and store the fresh batch of memories that just came in. In other words, after you start drinking, your ability to remember new things gets wobbly, but your memory of events and information leading up to that drink might be sturdier than normal.
|
What do ancient non-Christian sources tell us about the historical Jesus?
The Annals, by Roman historian Tacitus
This article from Biblical Archaeology covers all the non-Christian historical sources that discuss Jesus.
About the author:
Lawrence Mykytiuk is associate professor of library science and the history librarian at Purdue University. He holds a Ph.D. in Hebrew and Semitic Studies and is the author of the book Identifying Biblical Persons in Northwest Semitic Inscriptions of 1200–539 B.C.E. (Atlanta: Society of Biblical Literature, 2004).
Here are the major sections:
• Roman historian Tacitus
• Jewish historian Josephus
• Greek satirist Lucian of Samosata
• Platonist philosopher Celsus
• Roman governor Pliny the Younger
• Roman historian Suetonius
• Roman prisoner Mara bar Serapion
And this useful excerpt captures the broad facts about Jesus that we get from just the first two sources:
We can learn quite a bit about Jesus from Tacitus and Josephus, two famous historians who were not Christian. Almost all the following statements about Jesus, which are asserted in the New Testament, are corroborated or confirmed by the relevant passages in Tacitus and Josephus. These independent historical sources—one a non-Christian Roman and the other Jewish—confirm what we are told in the Gospels:31
1. He existed as a man. The historian Josephus grew up in a priestly family in first-century Palestine and wrote only decades after Jesus’ death. Jesus’ known associates, such as Jesus’ brother James, were his contemporaries. The historical and cultural context was second nature to Josephus. “If any Jewish writer were ever in a position to know about the non-existence of Jesus, it would have been Josephus. His implicit affirmation of the existence of Jesus has been, and still is, the most significant obstacle for those who argue that the extra-Biblical evidence is not probative on this point,” Robert Van Voorst observes.32 And Tacitus was careful enough not to report real executions of nonexistent people.
2. His personal name was Jesus, as Josephus informs us.
3. He was called Christos in Greek, which is a translation of the Hebrew word Messiah, both of which mean “anointed” or “(the) anointed one,” as Josephus states and Tacitus implies, unaware, by reporting, as Romans thought, that his name was Christus.
4. He had a brother named James (Jacob), as Josephus reports.
5. He won over both Jews and "Greeks" (i.e., Gentiles of Hellenistic culture), according to Josephus, although it is anachronistic to say that they were "many" at the end of his life. Large growth in the number of Jesus' actual followers came only after his death.
6. Jewish leaders of the day expressed unfavorable opinions about him, at least according to some versions of the Testimonium Flavianum.
7. Pilate rendered the decision that he should be executed, as both Tacitus and Josephus state.
8. His execution was specifically by crucifixion, according to Josephus.
9. He was executed during Pontius Pilate’s governorship over Judea (26–36 C.E.), as Josephus implies and Tacitus states, adding that it was during Tiberius’s reign.
Some of Jesus’ followers did not abandon their personal loyalty to him even after his crucifixion but submitted to his teaching. They believed that Jesus later appeared to them alive in accordance with prophecies, most likely those found in the Hebrew Bible. A well-attested link between Jesus and Christians is that Christ, as a term used to identify Jesus, became the basis of the term used to identify his followers: Christians. The Christian movement began in Judea, according to Tacitus. Josephus observes that it continued during the first century. Tacitus deplores the fact that during the second century it had spread as far as Rome.
I remember reading the 1996 book by Gary Habermas entitled “The Historical Jesus: Ancient Evidence for the Life of Christ“. This book is a little before the time of most of you young Christian apologists, but back before the time of Lee Strobel and J. Warner Wallace, this is the stuff we all read. Anyway, in the book he makes a list of all that can be known about Jesus from external sources. And fortunately for you, you don’t have to buy the book because you can read chapter 9 of it right on his web site.
From Tacitus he gets this:
And from Josephus he gets this:
So when you are reading the New Testament, these facts are the framework that you read within. It’s a good starting point when dealing with people who have never looked into who Jesus was and what he taught and what his followers believed about him, right from the start.
|
Lay Statements
Lay statements are any statements made by the veteran, family members, friends, neighbors, or service buddies, whether given as an oral statement during a hearing or as a written declaration or written statement (preferably an affidavit signed and notarized under oath), regarding the veteran's disability.
Lay statements refer to statements made by laypersons: people who lack professional medical expertise and are not medical professionals, and who therefore cannot make a medical diagnosis or prove medical facts.
But as anyone knows, there are many things that a layperson can observe and describe, such as seeing the veteran bleeding, seeing the veteran injured, or seeing the veteran walking with a limp. This may even include repeating a diagnosis that a medical professional told the layperson. A layperson cannot provide a diagnosis of diabetes, but the layperson can document the diagnosis a medical provider stated to the veteran or the layperson.
Lay statements are useful to document events that a person who is not a medical provider observed regarding the veteran. These lay statements can fill in missing information. The more factual the statements the better.
The Federal Circuit in Jandreau v. Nicholson, 492 F.3d 1372 (Fed. Cir. 2007), stated there are times a lay person is competent to testify to a medical condition.
Jandreau held:
[l]ay evidence can be competent and sufficient to establish a diagnosis of a condition when (1) a lay person is competent to identify the medical condition, (2) the layperson is reporting a contemporaneous medical diagnosis, or (3) lay testimony describing symptoms at the time supports a later diagnosis by a medical professional.
If you have been denied a claim, and filed your first notice of disagreement (NOD) on or after June 20, 2007, please contact the Law Office of Robert B Goss, P.C. for assistance in appealing your denial.
A good lay statement will be factual and provide enough detail to verify the event or injury. For example:
I injured my knees during an attack at Choksong, Korea, while assigned to the 1st Platoon, 1st Battalion, 7th Infantry Regiment, 3d Infantry Division. On January 1, 1951, our platoon came under enemy mortar attack while I was assigned to the front lines. We were atop a hilltop and freezing. Without thinking, we decided to start a fire to get warm. Moments after we started the fire, the enemy forces started shelling our position with mortars. Our fire had become visible to every enemy soldier within eyesight; we had created a beacon: hey, here we are. As the first shells landed, I leaped over the side of the hill, thinking it was about a 5-foot drop. Instead it was at least a 20-foot fall to a narrow ledge on the side of the hill. When I landed I severely injured my knees and twisted my right ankle. However, I was still alive. The following morning our unit was rescued and I was hoisted off the ledge and carried to the aid station. Since that day, my knees have hurt and I walk with a limp.
An example of a lay statement that provides no helpful information may include a rant about how stupid the VA is, which is something I see quite often:
I was injured during the war.
No other information is provided. This type of statement does not provide the VA with any useful information to verify the injury. The exception would be during a hearing, where I could elicit more information regarding the combat operation.
On the other hand, if the statement was "I was injured playing flickerball," then yes, even this most deadly of sports is going to need something more than that statement.
This is where a buddy statement would be appropriate, stating that
you were assigned to Squadron Officer School with Lt. Snuffy, and during a thrilling flickerball game 2nd Lt. Snuffy was injured. I personally helped carry 2nd Lt. Snuffy off the field, where he was then transported to the 43rd Air Base Wing's hospital and treated for a broken ankle. Later that evening 2nd Lt. Snuffy returned to the barracks with a pair of crutches and his leg in a cast. I remember this clearly because, since Lt. Snuffy was not a pilot but just a navigator, his cast was only signed by paratroopers.
Hopefully my son the paratrooper has quit reading by now, and for that matter all my crewmates who are navigators. Remember I love you all.
On the other hand, if you have a claim for VA disability benefits that has been denied, you filed your first NOD on or after June 20, 2007, and you need assistance, please contact The Law Office of Robert B Goss, P.C. at
Contact Information
|
Religions and philosophies

• Korean Thought: A Korean origin myth described in the context of Korean society and as a comparison to Western thought.
• Shinto: A short introduction to Shinto, Japan's native belief system.
• Islamic Iran: An expansive essay on the history of Iran through the first great global age.
• Islamic Calligraphy and the Illustrated Manuscript: The calligraphic tradition, which grew out of the demand for illuminated Qur'ans, became an important art form worldwide.
• Islam in Southeast Asia
• Introduction to Southeast Asia
• Historical and Modern Religions of Korea: An overview of Korea's mainstream religions, from Shamanism to Christianity.
• Excerpts from Religious Texts
• Chess is not just a game: it is entwined with laws, arts, sciences, and mathematics.
• Diversity and Unity
|
...and that's why they call it Cleveland
Moses Cleaveland (January 29, 1754 – November 16, 1806) was a lawyer, politician, soldier, and surveyor from Connecticut who founded the U.S. city of Cleveland, Ohio, while surveying the Western Reserve in 1796.
Cleaveland was born in Canterbury, Windham County, Connecticut. He studied law at Yale University, graduating in 1777. He was commissioned as an ensign in the 2nd Connecticut Regiment of the Continental Army. In 1779 he was promoted to captain in the newly formed Corps of Engineers. He resigned from the army on June 7, 1781 and started a legal practice in Canterbury. As a Freemason he was initiated in a military lodge and became W. Master of Moriah Lodge, Connecticut.
He was known as a very energetic person with high ability. In 1788, he was a member of the Connecticut convention that ratified the United States Constitution. He was elected to the Connecticut General Assembly several times and in 1796 was commissioned brigadier general of militia.
He was a shareholder in the Connecticut Land Company, which had purchased from the state of Connecticut the land in northeastern Ohio for $1,200,000. (This territory was then reserved to Connecticut by Congress; it was known at its first settlement as New Connecticut, and in later times as the Western Reserve.)
He was approached by the directors of the company in May 1796 and asked to lead the survey of the tract and the location of purchases. He was also responsible for the negotiations with the Native Americans living on the land. In June 1796, he set out from Schenectady, New York.
His party numbered fifty people, including six surveyors, a physician, a chaplain, a boatman, thirty-seven employees, a few emigrants and two women who accompanied their husbands.
The expedition traveled along the shore and on July 22, 1796, landed at the mouth of the Cuyahoga River, where the city of Cleveland stands today.
There were only four settlers the first year, and by 1820 the population had grown to 150 inhabitants. Moses Cleaveland went home to Connecticut after the 1796 expedition and never returned to Ohio or the city that bears his name. He died in Canterbury, Connecticut, where he is also buried. Today, a statue of him stands on Public Square in Cleveland.
|
Jul 01 2017 : The Times of India (NaviMumbai)
Gynaecological cancers: What you really need to know
Surgical oncologist and robotic surgeon Dr Donald John Babu (MCh, FEBS, FICS, FAIS, FCPS, FIAGES, FMAS, MS) says that the uterus, ovary, cervix and vulva can each give rise to cancer. These cancers are overgrowths in these organs that are beyond control by general medicines. If left to themselves, they will grow and spread to different parts of the body, leading to death. He answers questions related to gynaecological cancers...
How can one avoid gynaecological cancers?
Some have known causative factors; others don't. Cervical cancer involves a virus, Human Papilloma Virus (HPV), in 95 per cent of cases. It generally spreads through multiple sexual contacts, repeated genital trauma such as childbirth, poor genital hygiene and so on. These are avoidable circumstances. Vaccines have also been developed, which are used in the age group of nine to 26 years for boys and girls. Girls who are older can receive them if they haven't had sexual exposure before. Uterine cancers have risk factors like obesity, hypertension and diabetes, but no direct correlation. Ovarian cancers have no particular risk factor other than age and sometimes heredity. Vulval cancers have the same risk factors as cervical cancer.
Is there any way to discover them at an early stage? Is there screening for these cancers?
I would like to emphasise and reiterate the importance of screening here. Early-stage cancers are 100 per cent curable. However, they need to be discovered early. We do have pap smears for cervical cancers, tumour markers and sonography for ovarian cancers, and OPD pipette biopsies for uterine cancers, along with sonography of the abdomen. These are available as screening packages for women above 50 years at MGM Hospital, Vashi.
What does treatment entail? Are they curable?
Fortunately, most gynaecological cancers are curable. And most are indeed discovered at an early stage if the woman reports to the doctor immediately when symptoms appear or regularly participates in screening. Ovarian cancers generally require surgery and chemotherapy. Uterine and cervical cancers are treated with a combination of surgery and radiotherapy. We, at MGM Hospital, Vashi, also offer robotic surgery for patients who are conscious about their abdominal scars and desire early recovery from surgery. Robotic surgery is a standard procedure for hysterectomies in the West and is utilised for benign and malignant pathologies of the uterus. Robotic surgeries also offer surgical accuracy, depth perception and less blood loss for the surgeon and patient.
Where: MGM Hospital, Vashi.
E-mail: donald@p53cancerclinic.com; website: http://www.p53cancerclinic.com (*Procedures given are based on the expert's understanding of the said field)
|
Next words of Jesus: Peace be with you
When Jesus appeared to Mary Magdalene, no one knew that he had risen. The sight of him, and recognition of who it was, came as a shock. The men did not believe her testimony, but the two men who decided to walk to Emmaus at least decided to talk about it and puzzle about it.
When they, too, recognized Jesus, they returned to Jerusalem with their testimony.
Everyone was happy and joyful with this confirmation of the good news–until Jesus himself showed up. Why would that dampen the mood so much?
Luke says they thought they saw a ghost. We can understand that. It’s not every day that people stand around talking about a dead man who’s up walking again and then he suddenly shows up, despite locked doors, of all things. So he invited them to touch him and asked for something to eat. They still didn’t entirely believe.
There’s more going on that Luke does not say. Peter had denied Jesus three times. Everyone else but John had deserted him and not had courage enough to show up at the cross. For three years, they had experienced his occasional frustration at their lack of understanding. They had seen his anger.
But they had never failed him before as they did after his arrest. And now here he is. What do we do now? What if he’s given up on us? Guilt and shame must have clouded their joy.
John testifies that Thomas was not present on that occasion and that he did not believe their testimony. He declared that seeing Jesus would not be enough. He could not believe until he touched the wounds.
Thomas seems like a gloomy Gus every time we meet him, but let’s not be too critical. Pessimism often causes people to give up on whatever hopes and desires they have. Thomas kept doggedly wanting to believe, even though his own innate pessimism was the only barrier that kept him from it.
The week after Easter must have been much longer for Thomas than for the others. At least they believed their own experience.
Only on the following Sunday did Jesus turn up again. Again they had locked the doors. Whatever all but Thomas believed about Jesus, they did not yet have the courage to declare the risen Christ publicly. Jesus invited Thomas to touch his wounds.
Did Thomas do so? Imagine his shame not only for what he did when Jesus was betrayed, but for doubting his friends’ testimony all week! Yet also imagine his joy as evidence of what he most wanted to believe appeared before him.
John does not say that Thomas indeed touched Jesus. He may have, or he may have been ashamed to. John also does not say that Thomas knelt in worship, either, but how could he not? He called Jesus “My Lord and my God.”
Significantly, when Jesus appeared he did not simply say, “Hello,” or “Greetings,” or any number of possible perfunctory salutations. He said, “Peace be with you.” That may have been a common enough greeting, but surely everyone remembered that he had said, “Peace I leave with you” on the night he was betrayed.
He told them they would all fall away. They did. Then he told them about his peace. He met them with that peace the next time he saw them, right after their greatest failures. Over the coming month, he would say things that would make them uncomfortable. He would scold them later. But not now.
Martin Luther took his sin so much to heart that he tried to atone for it by strict religious devotion. Of course, he failed. As a result, he hated the very concept of the righteousness of Christ. Then he discovered in Romans that God did not intend for him to atone for his own sin. All he needed to do was confess failure, accept Jesus’ sacrifice for it, and he would receive the righteousness of Christ as a gift.
What Luther learned from Scripture, the disciples learned from this early encounter with the risen Lord. God chooses to deal with sin not by punishing it and rejecting the sinner, but by accepting the sinner and offering peace. Sin has consequences, but rejection by God is not among them.
Peace be with you
|
Nyāya Substance Dualism
In a previous post, I went over an argument for the existence of God that was formulated by philosophers in the Nyāya tradition. Here my aim is to provide a brief summary of some Nyāya arguments for substance dualism, the view that mental and physical substances are radically distinct.
The categories of substance and quality were fundamental to Nyāya metaphysics. A substance is the concrete substratum in which qualities inhere. An apple, for instance, is a substance, and redness is a quality that inheres in it. Substances can be complex and made up of parts (like an apple) or simple and indivisible (like an atom).
Nyāya held that in addition to physical substances, there are non-physical ones. Our individual soul – that is, our Self – is a non-physical substance. Like atoms, individual souls are simple and indivisible, and hence eternal (since destroying an object is the same as breaking it up into its constituent parts, and simple substances do not have any constituent parts). Consciousness, and different conscious states like desires and memories, are qualities that inhere in the substantial Self.
The primary philosophical adversaries of Nyāya belonged to two different camps. The first was Cārvāka, which claimed that only physical substances exist, that the mind does not exist apart from the body, and that the self is reducible to the totality of the body and all its functions. The other was Buddhism, which rejects physicalism but denies the existence of the substantial Self. Buddhism replaces the idea of the Self with a stream of momentary causally connected mental states. Nyāya was engaged in a protracted series of debates with both Cārvāka and Buddhism. Versions of the arguments I summarize in this essay were developed and defended by Nyāya thinkers such as Vātsyāyana (5th century), Uddyotakara (7th century) and Udayana (10th century), among others.
Against Physicalism
Nyāya came up with a number of arguments against physicalism. The one I focus on here has interesting similarities to arguments found in contemporary debates within the philosophy of mind. It can be stated like this:[1]
(P1) All bodily qualities are either externally perceptible or imperceptible.
(P2) No phenomenal qualities are externally perceptible or imperceptible.
(C) Therefore, no phenomenal qualities are bodily qualities.
The argument is deductively valid, so let us examine the premises. As the term suggests, externally perceptible bodily qualities are features of the body that can be directly perceived by external agents. Color is an example of an externally perceptible quality. Everyone who can see me can see that the color of my body is brown. An imperceptible quality is a feature of the body that cannot be directly perceived, but can be inferred through observation and analysis. Weight was a common example used in Nyāya texts. You cannot directly perceive my weight, but if I stand on a weighing machine, you can know my weight by looking at the number displayed by the machine. P1 states that all physical qualities are exhausted by these two categories.
Let us move on to P2. Phenomenal qualities are the features of conscious experience: the subjective, first person what-it-is-likeness to have an experience. The experience of color, pleasure, pain, desire, and memory are all examples of phenomenal qualities. P2 draws on the intuition that phenomenal qualities are essentially private.
To say that phenomenal qualities are not externally perceptible is to say that I cannot immediately know what it is like for you to have an experience. I have direct access to externally perceptible qualities of your body like color, but I don’t have direct access to your phenomenal qualities. I may be able to infer based on your behavior that you are in pain, but I don’t experience your pain in the immediate, first person manner that you do. The contemporary American philosopher Thomas Nagel made a similar point in his classic paper What Is it Like to Be a Bat? We may be able to observe how bats behave, and how their organs, brain and nervous system work, but we can’t know what it feels like, from the inside, to be a bat. Only a bat knows what it is like to be a bat.
If phenomenal qualities aren’t externally perceptible, perhaps they are imperceptible qualities like weight. But this is extremely implausible. Phenomenal qualities are not externally perceptible, but they’re clearly internally perceptible. The whole point is that I have direct perceptive access to phenomenal qualities – my conscious experiences are given to me in a basic and immediate fashion. Even if I don’t know that my experiences are veridical, I always know what the features of my own experience are. Thus, phenomenal qualities are not imperceptible.
Since phenomenal qualities are neither externally perceptible nor imperceptible, they are not physical qualities. If physicalism is the thesis that only physical substances and their qualities exist, and the above argument is sound, we must conclude that physicalism is false.
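Since the discussion leans on the claim that the argument is deductively valid, it may be worth seeing that the step from P1 and P2 to C checks out formally. Below is a minimal sketch in Lean; the predicate names are my own labels for the argument's categories rather than anything drawn from the Nyāya texts, and it verifies only validity, leaving the truth of the premises entirely open.

```lean
-- Minimal formalization of the anti-physicalism syllogism (my own labels, not Nyāya terminology).
-- P1: every bodily quality is externally perceptible or imperceptible.
-- P2: no phenomenal quality is externally perceptible or imperceptible.
-- C : no phenomenal quality is a bodily quality.
variable {Quality : Type}

theorem no_phenomenal_quality_is_bodily
    (Bodily Phenomenal ExtPerceptible Imperceptible : Quality → Prop)
    (P1 : ∀ q, Bodily q → ExtPerceptible q ∨ Imperceptible q)
    (P2 : ∀ q, Phenomenal q → ¬(ExtPerceptible q ∨ Imperceptible q)) :
    ∀ q, Phenomenal q → ¬Bodily q := by
  intro q hPhen hBodily
  exact P2 q hPhen (P1 q hBodily)
```

The proof is just an application of the two premises to an arbitrary quality, which is all the validity claim amounts to here; the substantive dispute is over whether P1 and P2 are true.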
Against No-Self Theory
The above argument by itself does not get us to the kind of substance dualism that Nyāya favored. Buddhists, after all, are anti-physicalists, but they do not believe that the Self is an enduring substance that persists through time. Instead, Buddhists view a person as nothing more than a series of sequential causally connected momentary mental states. The 18th century Scottish philosopher David Hume, and more recently, the British philosopher Derek Parfit, came to roughly the same conclusion.
Again, the Nyāya canon has several arguments against the Buddhist no-Self theory, but I will touch on just two of them here. The first of these is that the Self is necessary to explain the first person experience of recollection or recognition. The intuition here is something like this: If I notice a tree and recognize that it is the same tree I saw a few days ago, there has to be a subject that was present both during the first experience and the second one for recollection to occur. Similarly, if the desire to eat a banana arises in my mind at t2 because I remember that I previously enjoyed eating a banana at t1, there has to be a subject that existed during the initial experience that occurred at t1, and persisted through time until the recollection at t2. Without the Self – a substance that endures through these different points in time – the experience of memory is a mystery.
The Buddhist response was that causal connections between momentary mental states could explain the phenomenon of memory. If the mental state at t1 is causally connected to the mental state at t2, that’s all that’s needed for the mental state at t2 to recall the experience at t1. The Nyāya rejoinder was that causal connections were not sufficient to account for how a mental event can be experienced as a memory. When I recognize a tree I saw few days ago, it isn’t just that an image of the previously perceived tree pops into my mind. Rather, my experience is of the form: “This tree that I see now is the same tree I saw yesterday.” In other words, my present experience after seeing the tree involves my recognition of the previous experience as belonging to myself. Similarly, my current desire to eat a banana is based on my recognition of the previous enjoyable experience of eating a banana as belonging to myself. One person does not experience the memory of another, and in much the same way, one mental state cannot remember the content of another. So a single entity that persists through time must exist.
The second argument for the Self takes for granted what we might call the unity of perception. Our perceptions aren't a chaotic disjointed bundle despite the fact that they arise through different sense organs. There's a certain unity and coherence to them. In particular, Nyāya philosophers drew attention to mental events that are characterized by cross-modal recognition. An example would be: "The table that I see now is the same table I am touching." We have experiences that arise through different channels (in this case, my eye and my hand), but there must be something that ties these experiences together and synthesizes them to give rise to a unified cognitive event. In other words, the Buddhist no-Self theory might be able to explain the independent experiences of sight and touch, but for the object of both experiences to be recognized as one and the same, there must be something else to which both experiences belong, and which integrates the experiences to give rise to the unified perception of the object. Again, it seems we must admit the existence of the Self.
Needless to say, all these arguments were (and remain) controversial. The debates between Buddhist and Nyāya philosophers got extremely complex over time. They involved increasingly fine-grained analyses of the phenomenology of recollection/recognition, and increasingly technical discussions on the metaphysics of causation. Similar debates took place between other orthodox Indian schools of thought that believed in the Self (Mīmāṃsā, Vedānta, etc.) and their Buddhist no-Self rivals. A good place to start for further reading on this subject would be the collection of essays in Hindu and Buddhist Ideas in Dialogue: Self and No-Self.
[1] The argument I’ve presented here is based on Kisor Kumar Chakrabarti’s formulation in Classical Indian Philosophy of Mind: The Nyāya Dualist Tradition.
An Introduction to Phenomenal Conservatism
Phenomenal Conservatism (PC) is a foundationalist theory of justification that can be applied to perception as well as the a priori. Michael Huemer formulates PC like this:
PC: If it seems to S that p, then, in the absence of defeaters, S thereby has at least some degree of justification for believing that p (Huemer 2007).
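Put schematically (the notation below is my own gloss of the principle's logical form, not Huemer's), PC is a universally quantified conditional:

$$\forall S\,\forall p\;\bigl[\,\mathrm{Seems}(S,p)\;\wedge\;\neg\mathrm{Defeated}(S,p)\;\rightarrow\;J(S,p)\,\bigr]$$

where $J(S,p)$ is to be read as S's having at least some degree of justification for believing that p, not necessarily full justification.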
PC takes seemings to be the epistemically relevant mental states. Seemings are appearances that something is the case, such as the appearance of a desk in front of me. It seems to me that there is a desk in front of me. Seemings are propositional attitudes: it seems to S that P is the case. For it to seem to S that P, the proposition <P> must be the content of S’s seeming. But just because seemings are propositional attitudes, it doesn’t follow that they lack a phenomenology. Seemings have a feel of veridicality; they present their contents as if they were the case. In other words, seemings have assertive content. Contents that are presented to the subject assertively have a phenomenology of, for lack of a better descriptive term, truthiness.
PC is a form of internalism about justification, which is the view that justification supervenes on the mental states of the subject, or things that are epistemically accessible. To say that justification supervenes onto the mental or the accessible is to say that there cannot be a change in mental states or what is accessible without a change in justificational status. The version of PC that takes the supervenience base to be mental states without an accessibility requirement is called mentalism, and it can be seen as a form of reductionism about justification. The version of PC that takes the base to be epistemically accessible things is called accessibilism, and is a version of non-reductionism about justification. Mentalism can give a reductive analysis of justification in terms of properties of mental states, whereas accessibilism takes access to be a primitive, epistemic notion which cannot be reductively analyzed without circularity. PC can be formulated in either way, but I take it to be a hybrid because seemings are both mental states and intrinsically accessible to the subject.
PC can be construed as either weak or strong foundationalism. If it is taken to be a version of weak foundationalism, then seemings are not sufficient for fully justified beliefs based on them. Beliefs based on seemings, on this view, would have some justification, but not enough for full-blown justification. Those beliefs must also be supported by other beliefs, or other epistemically relevant states. If PC is a version of strong foundationalism, then seemings are sufficient for fully justified beliefs. Beliefs based on seemings are fully justified, absent defeaters. Huemer's version of PC can be seen as a hybrid, where some seemings may not be sufficient for full justification, while others are. The hybrid nature of Huemer's version of PC can be seen in the "at least some degree of justification" clause.
Justified beliefs can be defeated by various considerations. PC allows for defeat, which means that beliefs based on seemings can lose their fully justified status. For example, if I look at a pencil submerged in a glass of water, it seems to me that the pencil is bent. Lacking background knowledge about what happens when straight objects are submerged in water, I form the belief that the pencil is bent. I now have a belief that is at least partially justified. But then I pull the pencil out of the water and see that it is not actually bent. Puzzled, I search Wikipedia for an explanation, and learn what happens when straight objects are submerged in water. My belief about the pencil being bent is now defeated by counterevidence.
In some future posts I will explore objections to PC, such as the problem of cognitive penetrability, the Sellarsian dilemma, and the problem of the speckled hen. I will also examine issues related to the nature of seemings, and whether seemings form a homogeneous class of mental states, or if there are distinct kinds of seemings. Finally, I will explore the connection between PC and ethical intuitionism.
Works Cited
Huemer, Michael. “Compassionate Phenomenal Conservatism.” Philosophy and Phenomenological Research 74.1 (2007): 30-55. Web.
Pragmatism and Two Forms of Naturalism: Guest Post by Danny Krämer
American Pragmatists and the first wave of Naturalism
Two forms of Naturalism
The core of Naturalism
Two pragmatist traditions
Two Arguments for Hedonism
In value theory, or axiology, there are two kinds of theory: monistic and pluralistic. Monistic theories posit one kind of intrinsic value, whereas pluralistic theories posit more than one. Hedonism is a monistic theory of value which posits pleasure as the single kind of intrinsic value.
There are two interesting ways of arguing for hedonism that I want to explore. First, there is the argument from moral disagreement. The second one is the evolutionary debunking argument. Both strategies trade on an alleged fact about pleasure, which makes them variants on a more general kind of argumentative strategy. The alleged fact that both trade on is that we are directly acquainted with pleasurable mental states. Pleasure, on this view, is a property of mental states (I won’t go into what sort of property here). Since we are directly acquainted with at least the phenomenal qualities of our occurrent mental states, and pleasure is a phenomenal quality of mental states, we are directly acquainted with pleasure.
Direct acquaintance can be spelled out in various ways, but for now let’s just take it as a factive relation between a subject and some property. The relation is factive because the property must actually exist and be accessible to the subject for that property to be a member of an acquaintance relation. You can’t be acquainted with something that doesn’t exist. Similarly, you can’t know something that isn’t true. To be directly acquainted with some property is to have a special epistemic perspective on that property. For example, being in pain is an acquaintance relation because subjects are in pain, and a particular subject’s pain is had by that subject, which means that no other subject can have that same pain.[1] The subject in pain has a privileged epistemic perspective with respect to her pain. She is directly acquainted with her pain, which means she does not need to make an inference to know that she is in pain, having it is sufficient. Others cannot have this privileged perspective on her pains, but rather they must infer that she is in pain from her behavior.
Before unpacking the first argument for hedonism, we need to consider the argument from moral disagreement:
1. In any moral disagreement, at least one party must be in error.
2. There is widespread moral disagreement.
3. If there is widespread error about a topic, we should retain only those beliefs about it formed through reliable processes.
4. If there is widespread error about morality, there are no reliable processes for forming moral beliefs.
5. There is widespread error about morality (from 1 and 2).
6. We should retain only those moral beliefs formed through reliable processes (from 3 and 5).
7. There are no reliable processes for forming moral beliefs (from 4 and 5).
8. We should give up all of our moral beliefs (from 6 and 7).[2]
The hedonist responds to this argument by denying 4. There is a reliable process of forming moral beliefs, which is the process of phenomenal introspection. Engaging in phenomenal introspection reveals that we are directly acquainted with certain phenomenal properties, such as pleasure. Since we are directly acquainted with pleasure, we can see that pleasure is good. According to Neil Sinhababu, "Just as one can look inward at one's experience of lemon yellow and appreciate its brightness, one can look inward at one's experience of pleasure and appreciate its goodness."[3] There is a link between the goodness of pleasure and badness of pain, and the reasons why we morally praise and blame people. When somebody tortures an innocent person, a main reason we consider the torturer bad is that we know that pain is bad, and inflicting it for no reason is also bad. We morally blame the torturer for inflicting gratuitous pain, which means that there is moral disvalue in pain (and ipso facto, moral value in pleasure). So, hedonism about moral value is true.
The second argument goes like this. Our moral judgment and belief formation processes evolved under conditions which did not select for their reliability. We should not believe things produced by unreliable processes. So, we should suspend our moral beliefs and refrain from moral judgments. However, we are directly acquainted with pain and pleasure, and by virtue of that acquaintance we know that pain is intrinsically bad and pleasure is intrinsically good. The origins of those beliefs do not undermine their reliability. So, pain is intrinsically bad and pleasure is intrinsically good. Assuming no other kind of moral belief can be saved from debunking this way, it follows that we should be hedonists.
Peter Singer and Katarzyna De Lazari-Radek provide a thought experiment to back up the argument:
Thalia Wheatley and Jonathan Haidt hypnotized subjects to feel disgust when they read an arbitrarily chosen word – in this case, the word ‘often’. The students then read the following,
‘Dan is a student council representative at his school. This semester he is in charge of scheduling discussions about academic issues. He often picks topics that appeal to both professors and students in order to stimulate discussion.’
Students who had been primed under hypnosis to feel disgust at the word ‘often’ were then asked to judge whether Dan had done something wrong. A third of them said that he had. The negative moral judgment was, of course, an illusion, created by hypnosis, and it gives us no reason at all to believe that Dan’s conduct was wrong. Presumably once the experiment was over, and the students had been debriefed, they would agree that Dan had done nothing wrong. Now suppose that the students had been hypnotized to believe that when they read the word ‘often’ they would develop a blinding headache. Soon after being given information containing the headache triggering word, they held their heads, moaned, asked for analgesics, and tried to find somewhere quiet to rest. Asked to rate how they are now feeling on a scale rating from ‘very bad’ to ‘very good’, they rated the experience as ‘very bad’. After the experience was over and they had been debriefed, would they change their judgment that they had a very bad experience because the judgment was induced by hypnosis? Presumably not.[4]
The point is that they were directly acquainted with the bad experience (headache pain), and regardless of the origins of the judgments made about the badness of their experiences, they were justified in believing that their experiences were very bad. Direct acquaintance is still doing the heavy lifting here, because it is by virtue of it that the students are still justified in maintaining that their judgments were reliable. In the first experiment, the students were not directly acquainted with the alleged badness of Dan’s actions, so there was nothing there to defeat the genetic defeater of their judgments (that being that they were formed by hypnosis). In the case of pain, direct acquaintance becomes a defeater-defeater, which means that it undermines the unreliable origins of judgments formed on its basis. Presumably, we can run a similar thought experiment about pleasurable experiences as well. So, the goodness of pleasure and the badness of pain are not undermined by evolutionary considerations, whereas other evaluative judgments are. So, hedonism is true.
Both of these arguments are interesting in their own right. But what I find most interesting is that they rely on direct acquaintance as a means of arguing for hedonism. It seems like arguments for hedonism will typically take this form: Judgments about the value of things with which we are not acquainted are subject to epistemically unacceptable doubt. Judgments about the value of things with which we are acquainted are not subject to epistemically unacceptable doubt. Judgments that are subject to unacceptable doubt are not justified. Hedonistic judgments are judgments about the value of things with which we are acquainted. So, hedonistic judgments are justified.[5] The way I would suggest challenging this kind of argument is by questioning whether direct acquaintance is the only way to mitigate skeptical doubt. Perhaps intuitions could do the job as well, which would open up the possibility of intuitionist ethics (which tends not to be hedonistic).
[1] Sameness being numerical identity in this case.
[2] Cf. Sinhababu, The Epistemic Argument for Hedonism.
[3] Ibid.
[4] (Singer and Lazari-Radek 267-268).
[5] Presumably, the hedonist’s definition of ‘pleasure’ will cover other phenomenal states, like aesthetic appreciation, otherwise there could be other phenomenal states that seem to have intrinsic value that are not hedonic.
Works Cited
Lazari-Radek, Katarzyna De., and Peter Singer. The point of view of the universe: Sidgwick and contemporary ethics. Oxford: Oxford U Press, 2014. Print.
Sinhababu, Neil, The Epistemic Argument for Hedonism.
Are Facts Socially Constructed?
I enjoy watching YouTube videos. The sorts of videos I usually enjoy discuss academic topics I’m interested in, like philosophy. I’m especially fond of videos that put forward ideas that are commonly regarded as radical, or even indefensible. I like the debates that those videos engender among the video-makers who discuss those topics. In the vein of being a fan of YouTube debates, I want to add my two cents to a topic that has been gaining traction within certain communities. The topic is social construction. In particular, I want to discuss a thesis put forward by Dr. Kristi Winters in several videos. Her thesis is that facts qua facts are socially constructed entities. I will put the links to the videos I reference below this post, and I will add links to the end notes that direct to the times I reference.
What are facts? There is a lot of debate in analytic philosophy about facts, such as their internal structure, their nature, and how we should represent them formally. Typical, contemporary views take facts to be true truth-bearers, obtaining states of affairs, or some kind of entity in which individual objects exemplify properties and stand in relations.[1] While there is a lot of debate going on in the literature, one thing that isn’t hotly debated is whether all facts are socially constructed. Save for various kinds of idealism,[2] most mainstream views don’t take all facts to be dependent on minds.[3] So, facts can be either mind-dependent or mind-independent.
I’ll throw out a particular view about the ontology of facts to get things rolling. What this view is meant to do is show that there are quite intuitive points of view on the nature of facts which don’t take them to be essentially socially constructed, despite Dr. Winters’ implication to the contrary.[4] I will take facts to be obtaining states of affairs. To obtain is just to be the case, or to be actual.[5] A state of affairs is a distribution of properties over individuals that (can) stand in relations to other individuals. There can be states of affairs with just one individual that exemplifies some properties, and there can be states of affairs where several individuals exemplify properties and stand in certain kinds of relations to each other.
Dr. Winters likes the example of Pluto being reclassified as a dwarf planet. She believes that this demonstrates that facts about planets are socially constructed, because whether or not something is a planet is partly dependent on the classificatory conventions adopted by astronomers, and those conventions are products of social interactions among astronomers. If they changed the classificatory conventions, facts about the number of planets would change. So, facts about the number of planets are socially constructed. I hope I am not misrepresenting Dr. Winters with this reconstruction of her argument. I will now respond to this reconstruction.
If we are going to be scientific realists, then we should think that astronomers produce theories about things that exist independently of mental activity of any sort.[6] In other words, as realists, we should think of astronomy as developing an ontology of a certain aspect of the world we inhabit, namely the realm of celestial bodies and events. What their theorizing aims to do is reveal astronomical facts, which are constituted by properties distributed over individuals.
That astronomers changed the classification convention for planets such that it now excludes Pluto is one thing. That Pluto has the properties that qualify it as a dwarf planet rather than a planet within the classification convention is not socially constructed, but rather it is a mind-independent fact. The fact that Pluto has properties (p1…pn) is not dependent on the theories that astronomers believe, or any theories at all.
Furthermore, the fact that astronomers have such classification conventions is itself not socially constructed. That fact is just the distribution of properties over individual astronomers, wherein that distribution determines or grounds those naming conventions. That astronomers accept a classification convention is a fact about astronomers, and not the content of the classification convention. So, that fact is not constructed by the social processes that determined the contents of the classification convention.
If Dr. Winters objects at this point, I must ask, if the fact that astronomers accept some classification convention is mind-dependent, then what about the fact that that fact is mind-dependent? Is that fact also mind-dependent? If so, we end up with an ascending order of mind-dependency that either terminates in some super-mind, or keeps going to infinity. Think about it, if the second-order fact about the distribution of properties over astronomers is itself mind-dependent, on whose mind does it depend? The individual astronomers taken as a collective? What about the third-order fact that the second-order fact is mind-dependent? This process can repeat to infinity, and the minds of astronomers are not capable of housing this many facts. Unless we want to formulate this as some weird argument for the existence of God based on the social construction of facts, something’s gotta give. As far as I’m concerned, what’s gotta give is the implausible idea that all facts are socially constructed.
Now, I could just be misinterpreting Dr. Winters by imputing onto her ontological commitments about facts when she’s just making claims about epistemology.[7] However, she also claims that knowledge is constructed via social processes. Knowledge decomposes on analysis into various conditions, and any mainstream analysis includes truth as a necessary condition.[8] If knowledge is constructed, then presumably what that knowledge is about must also be constructed, otherwise there isn’t much to the claim that (some) knowledge is socially constructed.
Dr. Winters also talks about Kant’s noumenal/phenomenal distinction. I’m not sure how that’s supposed to help make sense of the social construction of facts. Perhaps she is adopting transcendental idealism, and she thinks that humans have a conceptual manifold along with pure intuitions of space and time, and these shape our experience of the world. Our experience of the world expresses itself within the conceptual boundaries allowed for by our constitution. The phenomenal realm is just what we experience as it is shaped by our mental constitution, and the noumenal realm is beyond our conceptual grasp. My worry here is that this is probably false. Even if it isn’t false, there are interpretations of the phenomenal/noumenal distinction that aren’t purely epistemic. Dual aspect views and two worlds views allow for metaphysical understandings of the phenomenal realm.[9] So, it isn’t obvious that Dr. Winters, if she is embracing transcendental idealism, is skirting by any ontological commitments when she says that facts are socially constructed.
Another worry is we would need an argument for why intersubjectivity determined by shared conceptual manifolds and pure intuitions of space/time entails that we ought to embrace a social constructionist ontology of facts. Up to now, we haven’t been provided with one.
So, if some truths are constructed, we’re back into the territory of metaphysics rather than epistemology. Whether or not truth-bearers are true or false must be sensitive to how the truth-bearer-independent world is at a given time.[10] Whatever aspect of the world that true truth-bearers must be sensitive to will be socially constructed, if knowledge of those truths is itself constructed. So, this isn’t just about epistemology if we take knowledge to be socially constructed. Dr. Winters could have stayed in the realm of epistemology by constraining talk of social construction to issues of justification and warrant in various social spheres. Perhaps standards of testimony are based on social norms, and those norms bias those standards in ways that are conducive to testimonial injustices against marginalized groups. However, she did not restrict herself to the realm of justification and warrant, so she does take on ontological commitments.
What Dr. Winters ought to do is check out the literature on the ontology of facts. She can then adopt a position that allows for some facts being socially constructed, such as facts about gender and race, perhaps. I actually recommend thinking about social construction in terms of grounding and dropping talk of facts entirely. So, to be socially constructed is to be grounded in distinctive social patterns.[11] The trick, then, is uncovering which social patterns are salient when considering particular social constructs such as gender or borders.
[1] Mulligan and Correia 2013.
[2] Some forms of idealism may allow for facts that are mind-independent, such as transcendental idealism. Transcendental idealism will come up again in this post.
[3] Mind-dependence is a necessary (albeit probably not sufficient) condition for something being socially constructed.
[4] Cf. 16:43-17:06. The way she states it implies that it’s quite obvious, given a certain amount of reflection on the nature of science, that facts are constructed by the social processes embedded within the institution of science.
[5] I’m leaving questions of modality aside. Assume that I’m talking about facts as things that obtain in the actual world.
[6] Dr. Winters may not accept realism, especially if she has sympathies for transcendental idealism.
[7] Cf. 21:06-23:16 for her clarification about ontology and epistemology.
[8] Setting aside the knowledge-first theorists, who presumably also take knowledge to be factive, just not subject to analysis into other concepts.
[9] Cf. Stang 2016.
[10] Besides claims about truth-bearers, but let’s set that complication aside.
[11] Schaffer 2016.
Works Cited
Mulligan, Kevin and Correia, Fabrice, “Facts”, The Stanford Encyclopedia of Philosophy (Spring 2013 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/spr2013/entries/facts/>.
Schaffer, Jonathan. “Social Construction as Grounding; Or: Fundamentality for Feminists, a Reply to Barnes and Mikkola.” Philosophical Studies (2016). Web.
Stang, Nicholas F., “Kant’s Transcendental Idealism”, The Stanford Encyclopedia of Philosophy (Spring 2016 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/spr2016/entries/kant-transcendental-idealism/>.
Dr. Winters’ Videos I Address
What Sargon of Akkad Doesn’t Know About Social Constructs
A Chat With Prof. Philip Moriarty on YouTube Atheism
Why The Presentation Argument for Property Dualism Fails
In my post, The Presentation Argument for Property Dualism, I examined an argument that targets property reduction and attempts to conclude that for any purported reduction of one property to another, there is a residual appearance left over. The residual appearance requires some property to which it corresponds, so any attempt at property reduction actually generates more properties.
One objection is that this argument relies on property intensionalism.[1] What this means is that we individuate properties by how we think of them rather than by extension. The argument for property dualism seems to require this principle of individuation:
(PI) If it’s not a priori that <F> and <G> are coextensive, then F and G are not identical.[2]
The principle says that if it isn’t a priori knowable that the concepts <F> and <G> have the same extensions, then the properties F and G are not identical. PI is at home with the view that the epistemology of properties is wholly a priori.[3] In essence, if we have two concepts that on reflection seem not to have (necessarily) identical extensions, then they pick out distinct properties. We can know which properties exist by analyzing our concepts of them.
The problem with PI is that it just seems like a form of property intensionalism. After all, why think that our a priori reflection on the extensions of our concepts reliably yields facts about properties that exist independently of those concepts? It seems like a massive coincidence without some dependence between properties and our concepts of them. But any dependence between concepts and properties that allows one to derive PI runs the risk of making properties unacceptably mind-dependent.[4] By unacceptably mind-dependent, I mean there must be some metaphysical dependency between properties and concepts such that it is more than just a coincidence that a priori reflection on concepts produces knowledge about properties in a reliable manner. Such metaphysical dependency is either a God-given pre-established harmony between concept and property, or some kind of idealism about properties, which ultimately amounts to idealism about almost everything.
The proponent of the presentation argument could respond by saying that PI and property intensionalism are conceptually distinct. One could maintain PI without embracing the anti-realist-sounding doctrine of property intensionalism, as is possible in my example of pre-established harmony. Perhaps proponents of the presentation argument could just say that our concepts reliably pick out properties that are not themselves individuated by those concepts.[5]
For the view that concepts reliably pick out properties that aren’t individuated by those very concepts to work in the presentation argument, the appearance properties must not be individuated epistemically, but rather metaphysically.[6] However, appearance properties seem to be individuated epistemically. After all, appearances are the wheelhouse of the internalist epistemologist, and as such they seem to be subject to intensional individuation if anything is. So, even if we grant that PI is compatible with an extensionalist individuation scheme for properties, the presentation argument still seems to rely on things whose very nature entails intensional individuation conditions.[7]
Given that the presentation argument’s reliance on PI is part and parcel with intensional individuation conditions for (at least) appearance properties, there is another problem proponents of the argument must face. An extensionalist about property individuation holds to objective individuation conditions for properties. For example, the dispositional properties revealed by modern physics are individuated objectively; they are not individuated by something like PI. The property intensionalist is going to accept the same properties as the extensionalist, since the extensionalist typically endorses a scientific methodology for discovering properties. That scientific methodology will yield results that both the intensionalist and extensionalist have independent reasons to accept.[8] But the extensionalist has an advantage here, since all the properties both she and the intensionalist can agree to are those properties that we consider causally efficacious. All the causal work in the world can be done in virtue of the properties that a pure extensionalist individuation scheme is committed to. The intensionalist is going to have additional properties, and those properties are either causally inert or causally efficacious. If they are causally efficacious then they causally overdetermine the events they enter into alongside the extensionally individuated properties. If they are causally inert, then they are committed to epiphenomenal properties.
The first horn of the dilemma assumes that causal overdetermination is theoretically vicious, but there are reasons to doubt this.[9] If the intensionalist has independent reasons to think that causal overdetermination is ok, or a good thing to believe in, embracing the first horn shouldn’t bother her. The second horn is more problematic, though. We have good reasons to think that mental states are able to enter into causal relations. The appearances cited in the presentation argument seem to cause proponents of the argument to advance it, and they cause me to reflect on it, as well as property individuation. If those appearances weren’t there, theorists would lack motivation to formulate the presentation argument. Even less plausibly, embracing the second horn would entail that appearances are not among the things that cause us to discuss appearances. I, for one, am not brave enough to accept such a result.
So, the proponent of the presentation argument must accept PI, and thereby is either committed to pre-established harmony between concepts and properties, a massive coincidence, or idealism (anti-realism) about properties. If the proponent attempts to disavow property intensionalism yet hold to PI, she will find herself lapsing back into property intensionalism once she introduces appearance properties into the mix. The proponent is also committed to the same properties as the extensionalist, as well as many more properties individuated intensionally. But, there are plausible reasons to think that the causal work in the world is done by the properties we individuate extensionally. So, the proponent of the argument must either embrace causal overdetermination, or epiphenomenalism about appearances. I argued that the latter is less plausible than the former, so the proponent must adopt overdetermination. At this point in the dialectic, the proponent must give us good, independent reasons to think overdetermination obtains in the causal order. Until then, we remain at liberty to deny the conclusion of the presentation argument, and remain physicalists.
End Notes
[1] Howell 104-105.
[2] Ibid 105.
[3] Ibid 106.
[4] Ibid 106-107.
[5] Ibid 107-108.
[6] Ibid 108.
[7] Ibid 108-109.
[8] There’s a strong case to be made for science as the most reliable way for detecting many if not all properties that are instantiated, and that case can be made independently of the individuation debate.
[9] Cf. Sider 2003.
Works Cited
Howell, Robert J. Consciousness and the Limits of Objectivity: The Case for Subjective Physicalism. Oxford: Oxford UP, 2013. Print.
Sider, Theodore. “What’s So Bad About Overdetermination?” Philosophy and Phenomenological Research 67.3 (2003): 719-26. Web.
The Presentation Argument for Property Dualism
In this post I am going to lay out an argument for property dualism called The Presentation Argument (TPA). TPA is no longer a prevalent argument for property dualism used by philosophers. Max Black originally formulated it, J.J.C. Smart dealt with it, and Stephen White has recently defended it (Howell 103; cf. White 2010 & Smart 1971).
According to Smart,
“. . . it may be possible to get out of asserting irreducible psychic processes, but not out of asserting the existence of irreducible psychic properties. For suppose we identify the Morning Star with the Evening Star. Then there must be some properties which logically imply that of being the Morning Star, and quite distinct properties which entail that of being the Evening Star. Again, there must be some properties (for example, that of being a yellow flash) which are logically distinct from those in the physicalist story.
Indeed, it might be thought that the objection succeeds at one jump. For consider the property of “being a yellow flash.” It might seem that this property lies inevitably outside the physicalist framework . . .” (Smart 63).
The point being made is that reduction works for two seemingly distinct things or objects, but when it comes to properties, it seems problematic. Smart’s example shows that reduction works when discussing the Morning and Evening Stars, since they are both Venus. But there are distinct properties of Venus which serve as the truth conditions for propositions about the Morning Star, and those are not the same properties which serve as the truth conditions for propositions about the Evening Star. So there has been no reduction of properties, but only things. Smart makes the point in terms of psychic properties and processes, where “processes” seemingly denotes types of things.
When applied to the case of reducing the mind to something more fundamental, such as the physical, TPA becomes more salient. Consider reducing some token instance of a mental property to a token instance of a physical property of the brain. The property token of pain would be reduced to the property token of c-fiber firing. But there is still the appearance of pain, the qualitative feel, which seems quite distinct from the property token of c-fiber firing, or any of its properties. In other words, the appearance of multiple properties is explained by the existence of multiple properties that account for those appearances (Howell 104). In the case of informative property identity statements, it appears as if the number of properties increases (Howell 104).
So, attempting to reduce mental properties to physical properties will leave an appearance residue that must be accounted for by more properties. If those properties are construed as physical, then there will still be more appearances left unaccounted for. Reducing the appearance of pain to some property of c-fiber firing will leave the appearance of the appearance of pain which itself must be reduced to something physical, or be accounted for by properties that do not appear to be physical. If they’re reduced to something physical, you get another iteration of the problem, thus adding more physical properties to your ontology without fully explaining the appearances.
TPA has it that any attempt at property reduction produces more properties. In the case of the mental, there will either be unexplained appearances along with a very large (infinite?) number of physical properties of the brain, or appearances that have non-physical properties as their grounding. So, physicalists who attempt to reduce seemingly mental properties to something more fundamental actually bloat their ontology with appearance properties which are fundamentally mental, or with physical properties and unexplained appearances.
What should be noted is that this is not a unique problem for physicalism about mental properties. Rather, it is a problem for any attempt at property reduction, although what generates it seems to be closely tied to considerations about phenomenal states (Howell 2013). In the case of the Morning Star and the Evening Star, reducing the properties determining the truth conditions for propositions about the Morning Star to those determining the truth conditions about the Evening Star (or vice versa) leaves an appearance residue which will need to be explained. There is nothing overtly mental about this iteration of the problem, and it can generate its own version of TPA. However, note that it still relies on appearances as that which needs to be explained. So, it seems fundamentally tied to considerations about phenomenology.
In a future post, I will present an objection to TPA in all its forms. In the meantime, let me know what you think about this argument in the comments section, or tell me if I’ve been unclear in my presentation of the argument.
Works Cited
Howell, Robert J. (2013) Consciousness and the Limits of Objectivity: The Case for Subjective Physicalism. Oxford: Oxford University Press.
Smart, J.J.C. (1971) “Sensations and Brain Processes,” in David M. Rosenthal (ed.), Materialism and the Mind Body Problem. Englewood Cliffs: Prentice Hall.
White, Stephen. (2010) “The Property Dualism Argument,” in George Bealer and Robert Koons (eds), The Waning of Materialism: New Essays. New York: Oxford University Press.
|
Private and Public Schools: Cooperation or Competition.
This paper explores the relationship between private and public schools. It challenges the assumption that competition between the private and public sectors is desirable and argues for a cooperative model in which public and private schools work together to educate children. Each sector has strengths that can help the other. These strengths include an emphasis on focused academic programs, communal organization, inspirational idealism, and decentralized governance. Increased cooperation would ease some of the recurrent problems in schools, such as private schools' relatively limited variety of electives and public schools' lack of social mobility, where the poor are more likely to be encouraged to exit at an earlier age, are more likely to pursue weaker educational programs, and are more likely to attend less prestigious schools. Also, private schools typically have stronger cultural bonds than public schools, since the former are generally smaller, choice-based, and more stable. Both kinds of school could also improve their inspirational ideology, particularly public schools, where teachers are hesitant to teach about values and beliefs, mostly because they have not been trained in these areas. Likewise, religious schools could learn from public schools the importance of trained leadership and professional support for administrators in school governance. Contains 21 references. (RJM)
Descriptors: Educational Change, Educational Cooperation, Educational Philosophy, Elementary Secondary Education, Foundations of Education, Private Schools, Public Schools
Author: Denig, Stephen J.
|
Septic System Design
• Sufficiency of Septic System
Is The Existing System Sufficient For The House?
Two factors determine whether a properly constructed septic system is able to effectively take care of all the wastewater coming from a household. These are 1) the size of the septic tank and 2) a properly constructed distribution system and its corresponding leach field so designed that it can treat the incoming wastewater.
How can you identify if the septic system in place is large enough for the home? Georgia's current policies require a septic tank to be a minimum of 1,000 gallons for a home having up to three bedrooms. For each added bedroom over three you would add 250 gallons to the required tank size. The final number should then be increased once more by a minimum of 50% if the house has a waste disposal unit, because that significantly adds to the amount of solid material the system must process.
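That sizing rule is simple enough to express as a short calculation. Below is a minimal sketch, assuming only the figures quoted above (a 1,000-gallon base for up to three bedrooms, 250 gallons per extra bedroom, and at least a 50% increase for a waste disposal unit); the function name and the example homes are my own, not part of any regulation text.

```python
def required_tank_gallons(bedrooms: int, has_disposal_unit: bool) -> int:
    """Minimum tank size implied by the sizing rule described above."""
    size = 1000 + max(0, bedrooms - 3) * 250  # 1,000 gal base, plus 250 gal per bedroom over three
    if has_disposal_unit:
        size = int(size * 1.5)                # at least 50% larger with a waste disposal unit
    return size

print(required_tank_gallons(3, False))  # 1000
print(required_tank_gallons(5, True))   # 2250
```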
Ground Filtration
In the second phase of the treatment process the wastewater either flows or is pumped to a distribution system of perforated pipes buried in gravel-lined trenches in an absorption field. How big a field is needed, and where should it be positioned, to effectively treat the effluent the soil receives? That is determined primarily by four factors: (1) how fast the soil can absorb water; (2) the depth of the groundwater on the site (and any seasonal depth changes); (3) how much water the system is expected to deal with every day; and (4) the topography of the land. The data from these four factors are used to determine the total size of the drain field that is required for a particular property. Let's talk about these factors in more detail.
Soil Percolation Rates.
You have probably heard house inspectors or others refer to the "percolation rate" of soil. That term describes how fast water can be absorbed by the soil, expressed as the time it takes for water in a test hole to drop in level by one inch (minutes/inch). Soil engineers (or other persons accredited to do so by the County) can determine the percolation rate of the soil in the leach field.
Soil Test
Percolation rates might vary between 5 minutes per inch and 90 minutes per inch and still be within guidelines, depending upon other factors. The faster the soil drains, the larger the trench area (drain field) needs to be. This is because fast-draining soil is less dense, and has less ability to effectively filter large quantities of wastewater. By increasing the overall size of the drain field, each square foot of ground is required to do less filtration and the wastewater can be adequately filtered by the larger amount of ground before reaching any groundwater.
Depth of Underground Water Table.
A soil engineer also determines the minimum depth of the underground water table on the land. It must be verified that there is an adequate layer of soil between it and the distribution pipelines to effectively filter out bacterial and viral pollutants before the wastewater rejoins the groundwater.
Expected Water Usage.
The size of the home, for the purpose of sizing a septic system, is measured by how many bedrooms there are and the presence or absence of a garbage disposal system, as discussed above. That will determine how many gallons a system is likely to process each day.
The presence of rocks, trees, other homes, or neighboring waterways or wells must likewise be considered under DHS guidelines. For example, drain fields must be at least 100 feet from drinking water sources, 50 feet from streams or ponds, and 10 feet from water lines. If the property owner is thinking about purchasing equipment that uses well water, they will obviously wish to be certain that the leach field was properly laid out and constructed in order to protect their drinking water.
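Those setback distances lend themselves to a quick site check. Here is a minimal sketch, assuming only the three distances quoted above; the feature names and the example site are illustrative, not drawn from any regulation text.

```python
SETBACKS_FT = {
    "drinking_water_source": 100,  # drain field at least 100 ft from drinking water sources
    "stream_or_pond": 50,          # at least 50 ft from streams or ponds
    "water_line": 10,              # at least 10 ft from water lines
}

def setback_violations(distances_ft: dict) -> list:
    """Return the features that sit closer to the proposed drain field than the guideline allows."""
    return [feature for feature, dist in distances_ft.items() if dist < SETBACKS_FT.get(feature, 0)]

# A site 80 ft from a well and 60 ft from a pond fails only on the well distance.
print(setback_violations({"drinking_water_source": 80, "stream_or_pond": 60}))
```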
As you can see, this is a complicated process, but now you have an idea of how a septic system works and the amount of information needed to plan for efficient processing of the wastewater.
• Ground Filtration
Phase II – Filtration
In the first post in this series, the first stage of the treatment process was discussed. Basically, the concept is that the wastewater from the plumbing systems exits the house via a pipe and through the power of gravity, flows into the septic tank. Over a relatively short period of time, the waste becomes separated into liquids and solids. After this the second stage of treatment takes place and what follows is an explanation of this second leg.
For the second stage of the treatment procedure (the filtration process) the effluent flows from the septic tank to the leach (absorption) field where it is eventually soaked up and dealt with by the dirt. If the absorption area is uphill from the septic system, the water initially moves right into a separate storage tank called a dosing storage tank. A pump then relocates the liquid to the distribution system in the absorption field for handling by the dirt. If no pump is required, the effluent will merely leave the septic tank (via a pipeline developed to permit only the effluent to leave), and will then proceed via a pipeline to the absorption field. A typical absorption field houses a system of perforated pipelines buried in trenches. The bottom of the trenches are filled with crushed stones or a similar product to ensure that the pipelines do not become obstructed and to allow for equal distribution of the wastewater into the dirt. As the water “percolates” down through the ground, the soil itself functions as a filter removing damaging bacteria, viruses, etc. from the effluent, prior to it eventually entering the underground water system.
There are numerous designs that can be utilized for the absorption field. Many of them consist of individual trenches as described in the previous paragraph, although they may be set in place in different ways as needed by the topography. Some systems might make use of a seepage pit instead, where the effluent empties into a large pit with a perforated or open-jointed cellular lining which permits the effluent to seep into the surrounding ground. These generally call for much less land area, but are only a good idea when normal absorption areas are not viable and wells are not threatened. A homeowner (or potential purchaser) ought to know exactly how the particular system on a lot is laid out, how it runs, and how best to maintain it.
This completes the basics of a septic system.
|
Around 1920, following the discovery of the Hertzsprung-Russell diagram still used as the basis for classifying stars and their evolution, Arthur Eddington anticipated the discovery and mechanism of nuclear fusion processes in stars, in his paper The Internal Constitution of the Stars.[22][23] At that time, the source of stellar energy was a complete mystery; Eddington correctly speculated that the source was fusion of hydrogen into helium, liberating enormous energy according to Einstein's equation E = mc². This was a particularly remarkable development since at that time fusion and thermonuclear energy, and even that stars are largely composed of hydrogen (see metallicity), had not yet been discovered.
In 1925 Cecilia Helena Payne (later Cecilia Payne-Gaposchkin) wrote an influential doctoral dissertation at Radcliffe College, in which she applied ionization theory to stellar atmospheres to relate the spectral classes to the temperature of stars.[24] Most significantly, she discovered that hydrogen and helium were the principal components of stars. Despite Eddington's suggestion, this discovery was so unexpected that her dissertation readers convinced her to modify the conclusion before publication. However, later research confirmed her discovery.[25]
By the end of the 20th century, studies of astronomical spectra had expanded to cover wavelengths extending from radio waves through optical, x-ray, and gamma wavelengths.[26] In the 21st century it further expanded to include observations based on gravitational waves.
Observational astrophysics
The majority of astrophysical observations are made using the electromagnetic spectrum.
The topic of how stars change, or stellar evolution, is often modeled by placing the varieties of star types in their respective positions on the Hertzsprung-Russell diagram, which can be viewed as representing the state of a stellar object, from birth to destruction.
Theoretical astrophysics
1. ^ Keeler, James E. (November 1897), "The Importance of Astrophysical Research and the Relation of Astrophysics to the Other Physical Sciences", The Astrophysical Journal, 6 (4): 271-288, Bibcode:1897ApJ.....6..271K, doi:10.1086/140401, [Astrophysics] is closely allied on the one hand to astronomy, of which it may properly be classed as a branch, and on the other hand to chemistry and physics.... It seeks to ascertain the nature of the heavenly bodies, rather than their positions or motions in space-what they are, rather than where they are.... That which is perhaps most characteristic of astrophysics is the special prominence which it gives to the study of radiation.
3. ^ a b "Focus Areas - NASA Science". nasa.gov.
4. ^ "astronomy". Encyclopædia Britannica.
5. ^ Lloyd, G.E.R. (1968). Aristotle: The Growth and Structure of His Thought. Cambridge: Cambridge University Press. pp. 134-5. ISBN 0-521-09456-9.
9. ^ Westfall, Richard S. (1980), Never at Rest: A Biography of Isaac Newton, Cambridge: Cambridge University Press, pp. 731-732, ISBN 0-521-27435-4
11. ^ Ladislav Kvasz (2013). "Galileo, Descartes, and Newton - Founders of the Language of Physics" (PDF). Institute of Philosophy, Academy of Sciences of the Czech Republic. Retrieved .
12. ^ Case, Stephen (2015), "'Land-marks of the universe': John Herschel against the background of positional astronomy", Annals of Science, 72 (4): 417-434, Bibcode:2015AnSci..72..417C, doi:10.1080/00033790.2015.1034588, The great majority of astronomers working in the early nineteenth century were not interested in stars as physical objects. Far from being bodies with physical properties to be investigated, the stars were seen as markers measured in order to construct an accurate, detailed and precise background against which solar, lunar and planetary motions could be charted, primarily for terrestrial applications.
13. ^ Donnelly, Kevin (September 2014), "On the boredom of science: positional astronomy in the nineteenth century", The British Journal for the History of Science, 47 (3): 479-503, doi:10.1017/S0007087413000915
14. ^ Hearnshaw, J.B. (1986). The analysis of starlight. Cambridge: Cambridge University Press. pp. 23-29. ISBN 0-521-39916-5.
15. ^ Kirchhoff, Gustav (1860), "Ueber die Fraunhofer'schen Linien", Annalen der Physik, 185 (1): 148-150, Bibcode:1860AnP...185..148K, doi:10.1002/andp.18601850115
16. ^ Kirchhoff, Gustav (1860), "Ueber das Verhältniss zwischen dem Emissionsvermögen und dem Absorptionsvermögen der Körper für Wärme und Licht", Annalen der Physik, 185 (2): 275-301, Bibcode:1860AnP...185..275K, doi:10.1002/andp.18601850205
17. ^ Cortie, A. L. (1921), "Sir Norman Lockyer, 1836 - 1920", The Astrophysical Journal, 53: 233-248, Bibcode:1921ApJ....53..233C, doi:10.1086/142602
18. ^ Jensen, William B. (2004), "Why Helium Ends in "-ium"" (PDF), Journal of Chemical Education, 81: 944-945, Bibcode:2004JChEd..81..944J, doi:10.1021/ed081p944
19. ^ Hetherington, Norriss S.; McCray, W. Patrick, Weart, Spencer R., ed., Spectroscopy and the Birth of Astrophysics, American Institute of Physics, Center for the History of Physics, archived from the original on September 7, 2015, retrieved 2015
20. ^ a b Hale, George Ellery, "The Astrophysical Journal", The Astrophysical Journal, 1 (1): 80-84, Bibcode:1895ApJ.....1...80H, doi:10.1086/140011
21. ^ The Astrophysical Journal 1(1)
22. ^ Eddington, A. S. (October 1920), "The Internal Constitution of the Stars", The Scientific Monthly, 11 (4): 297-303, JSTOR 6491
23. ^ Eddington, A. S. (1916). "On the radiative equilibrium of the stars". Monthly Notices of the Royal Astronomical Society. 77: 16-35. Bibcode:1916MNRAS..77...16E. doi:10.1093/mnras/77.1.16.
25. ^ Haramundanis, Katherine (2007), "Payne-Gaposchkin [Payne], Cecilia Helena", in Hockey, Thomas; Trimble, Virginia; Williams, Thomas R., Biographical Encyclopedia of Astronomers, New York: Springer, pp. 876-878, ISBN 978-0-387-30400-7, retrieved 2015
26. ^ Biermann, Peter L.; Falcke, Heino (1998), "Frontiers of Astrophysics: Workshop Summary", in Panvini, Robert S.; Weiler, Thomas J., Fundamental particles and interactions, Frontiers in contemporary physics, 423, American Institute of Physics, pp. 236-248, Bibcode:1998AIPC..423..236B, ISBN 1-56396-725-1, doi:10.1063/1.55085
27. ^ Roth, H. (1932), "A Slowly Contracting or Expanding Fluid Sphere and its Stability", Physical Review, 39 (3): 525-529, Bibcode:1932PhRv...39..525R, doi:10.1103/PhysRev.39.525
30. ^ The science.ca team (2015). "Hubert Reeves - Astronomy, Astrophysics and Space Science". GCS Research Society. Retrieved .
31. ^ "Neil deGrasse Tyson". Hayden Planetarium. 2015. Retrieved .
|
Non-Hodgkin's lymphoma (uncommon)
This mixed group of lymphomas is difficult to differentiate histologically, so that pathologists sometimes differ. One system of classification (Rappaport) recognizes four types, each of which can be nodular or diffuse: (1) Well differentiated lymphocytic. (2) Poorly differentiated lymphocytic. (3) Mixed lymphocytic and histiocytic. (4) Histiocytic (reticulum cell). For the purposes of prognosis they are conveniently divided into: (a) Low grade (small cell lymphocytic and follicular). (b) Intermediate or high grade (diffuse, large cell with small cell). An epidemic of B cell lymphomas is now being reported in California in the wake of the AIDS epidemic, so expect to see them elsewhere.
Non-Hodgkin's lymphoma may present as: (1) An enlarged group or groups of lymph nodes. (2) Symptoms caused by enlarged nodes compressing a patient's trachea and/or his bronchi, his biliary tract, his gut, his urinary system, or his spinal cord. (3) Invasion of these structures. (4) Involvement of his central nervous system. (5) Fever; this is unusual and mild.
NON-HODGKIN'S LYMPHOMA The investigations and differential diagnosis are the same as for Hodgkin's lymphoma (30.4).
STAGING is the same as for Hodgkin's lymphoma, but is less important because 90% of patients present in Stages Three or Four. The prognosis depends more on the histological type, than on the clinical stage.
PROGNOSIS. 'Small cell', high, or intermediate grade lymphomas are more likely to respond favourably. 'Large cell' ones are likely to be unfavourable. Nodular cases have a better prognosis than diffuse ones (most children are diffuse ones).
Untreated, low grade cases survive 7 to 8 years, and intermediate or high grade ones 2 or 3 years.
Here is the percentage of patients in whom you can expect a complete remission, the average remission in months, and the median survival time in months.
If the histology is favourable. Single agent (for example cyclophosphamide): complete remission 65%, average remission 35+ months, median survival 60+ months. 'COP': complete remission 80%, average remission 35+ months, median survival 60+ months.
If the histology is unfavourable. 'CHOP': complete remission 68%, average remission 23 months. 'MOPP' or 'C-MOPP': complete remission 41%, average remission 9 months.
In children, a combined regime will produce a complete remission in 80 to 90%, with a 2 year survival of 60 to 70%.
INDICATIONS. Chemotherapy is the mainstay of treatment, and radiotherapy is no better.
If a patient has asymptomatic low grade lymphocytic lymphoma, he is likely to present in Stages Three or Four. It develops so slowly that no treatment is indicated.
If he has a low grade lymphocytic lymphoma with masses of tumour tissue, anaemia, and leucopenia, treat him. A single-dose regime, for example with cyclophosphamide, is satisfactory. If necessary, transfuse him before starting.
If he has a low grade follicular lymphoma, it is likely to progress more rapidly than the lymphocytic variety. It is unlikely to respond to single-dose therapy, so give him a combined regime.
If he has a lymphoma of intermediate or high grade type, large cell or small cell, give him chemotherapy.
PREPARATION. Prepare him as in Section 32.2.
CYCLOPHOSPHAMIDE. Give him 1 g to 1.5 g/m² intravenously every three weeks up to 6 doses.
'COP'. Give him:
Cyclophosphamide 400 mg/m² orally on days 1 to 5. Or, 1 g to 1.5 g/m² intravenously on day 1.
Vincristine ('Oncovin') 1.4 mg/m² intravenously on day 1.
Prednisolone 100 mg/m² on days 1 to 5.
Repeat this course every 3 weeks up to 6 courses if the response is satisfactory. Then repeat it every 3 months.
Or, try 'CHOP', which is 'COP' plus doxorubicin ('Adriamycin') 50 mg/m² intravenously every 3 weeks.
Or, try 'MOPP' as for Hodgkin's lymphoma, or 'C-MOPP', which substitutes cyclophosphamide for 'Mustine'.
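For readers following the arithmetic, the quantities above scale with body surface area (BSA). The sketch below simply multiplies the quoted per-m² figures for the oral-cyclophosphamide form of 'COP' by an example BSA; the helper name and the 1.6 m² figure are my own, and this is illustrative arithmetic only, not a prescribing tool.

```python
# Per-cycle totals per square metre, taken from the 'COP' schedule quoted above.
COP_PER_M2 = {
    "cyclophosphamide_mg": 400 * 5,  # 400 mg/m2 orally on days 1 to 5
    "vincristine_mg": 1.4,           # 1.4 mg/m2 intravenously on day 1
    "prednisolone_mg": 100 * 5,      # 100 mg/m2 on days 1 to 5
}

def cop_cycle_totals(bsa_m2: float) -> dict:
    """Scale the per-m2 figures by a patient's body surface area."""
    return {drug: round(dose * bsa_m2, 1) for drug, dose in COP_PER_M2.items()}

print(cop_cycle_totals(1.6))
# {'cyclophosphamide_mg': 3200.0, 'vincristine_mg': 2.2, 'prednisolone_mg': 800.0}
```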
|
A logo is a graphic element, symbol, or icon of a trademark or brand, together with its logotype, which is set in a unique typeface or arranged in a particular way. A typical logo is designed to cause immediate recognition by the viewer. The logo is one aspect of the brand of a company or economic entity, and the shapes, colors, fonts and images are usually different from others in a similar market. Logos may also be used to identify organizations or other entities in non-economic contexts.
In recent times the term 'logo' has been used to describe signs, emblems, coats of arms, symbols and even flags. In this article several examples of true logos are displayed, which may generally be contrasted with emblems, or marks, which include non-textual graphics of some kind. Emblems with non-textual content are considered one aspect of a complete logo.
Distinct aspects of a complete logo:
- Logotype/Wordmark/Lettermark: text or abbreviated text
- Icon: symbol / brandmark
- Slogan: description of the company
The uniqueness of a logo is often necessary to avoid confusion in the marketplace among clients, suppliers, users, affiliates, and the general public. To the extent that a logo achieves this objective, it may function as a trademark, and may be used to uniquely identify businesses, organizations, events, products or services. Once a logo is designed, one of the most effective means for protecting it is through registration as a trademark, so that no unauthorised third parties can use it, or interfere with the owner's use of it.
There are several elements of a good logo.
- Should be unique, and not subject to confusion with other logos among viewers
- Is functional and can be used in many different contexts while retaining its integrity
- Should remain effective whether reproduced small or large
- Displays basic design principles (space, color, form, consistency, and clarity)
- Represents the brand/company appropriately
Logos today
Emblems (icons) may be more effective than a written name, especially for logos being translated into many alphabets; for instance, a name in the Arabic language would be of little help in most European markets. A sign or emblem would keep the general proprietary nature of the product in both markets. In non-profit areas, the Red Cross (which goes by Red Crescent in Muslim countries) is an example of an extremely well known emblem which does not need an accompanying name. Branding aims to facilitate cross-language marketing. The Coca-cola logo can be identified in any language because of the standards of color and the iconic ribbon wave.
Brand slogans
Sometimes a slogan is included in the logo. If the slogan always appears in the logo, and in the same graphic shape, it can be considered part of the logo. In this case it is a brand slogan, also called a claim, a tagline or an endline in the advertising industry. Its main purpose is to support the identity of the brand together with the logo. The difference between a slogan and a brand slogan is that a brand slogan remains the same for a long time to build up the brand's image, while different slogans are linked to each product or advertising campaign.
Logo design
- Avoid being over-the-top in an attempt to be unique
- Use few colors, or try to limit colors to spot colors (a term used in the printing industry)
- Avoid gradients (smooth color transitions) as a distinguishing feature
- Produce alternatives for different contexts
- Design using vector graphics, so the logo can be resized without loss of fidelity
- Be aware of design or trademark infringements
- Do not use a specific piece of clip-art as a distinguishing feature
- Do not use the face of a (living) person
- Avoid photography or complex imagery as it reduces the instant recognition a logo demands
This is an extract from Wikipedia, the Free Encyclopedia
|
Flash memory is an increasingly common storage medium in embedded devices, because it provides solid state storage with high reliability and high density, at a relatively low cost.
Flash is a form of Electrically Erasable Read Only Memory (EEPROM), available in two major types -- the traditional NOR flash which is directly accessible, and the newer, cheaper NAND flash which is addressable only through a single 8-bit bus used for both data and addresses, with separate control lines.
These types of flash share their most important characteristics -- each bit in a clean flash chip will be set to a logical one, and can be set to zero by a write operation.
Flash chips are arranged into blocks which are typically 128KiB on NOR flash and 8KiB on NAND flash. Resetting bits from zero to one cannot be done individually, but only by resetting (or "erasing") a complete block. The lifetime of a flash chip is measured in such erase cycles, with the typical lifetime being 100,000 erases per block. To ensure that no one erase block reaches this limit before the rest of the chip, most users of flash chips attempt to ensure that erase cycles are evenly distributed around the flash; a process known as "wear levelling".
Aside from the difference in erase block sizes, NAND flash chips also have other differences from NOR chips. They are further divided into "pages" which are typically 512 bytes in size, each of which has an extra 16 bytes of "out of band" storage space, intended to be used for metadata or error correction codes. NAND flash is written by loading the required data into an internal buffer one byte at a time, then issuing a write command. While NOR flash allows bits to be cleared individually until there are none left to be cleared, NAND flash allows only ten such write cycles to each page before leakage causes the contents to become undefined until the next erase of the block in which the page resides.
Flash Translation Layers
Until recently, the majority of applications of flash for file storage have involved using the flash to emulate a block device with standard 512-byte sectors, and then using standard file systems on that emulated device.
The simplest method of achieving this is to use a simple 1:1 mapping from the emulated block device to the flash chip, and to simulate the smaller sector size for write requests by reading the whole erase block, modifying the appropriate part of the buffer, erasing and rewriting the entire block. This approach provides no wear levelling, and is extremely unsafe because of the potential for power loss between the erase and subsequent rewrite of the data. However, it is acceptable for use during development of a file system which is intended for read-only operation in production models. The mtdblock Linux driver provides this functionality, slightly optimised to prevent excessive erase cycles by gathering writes to a single erase block and only performing the erase/modify/writeback procedure when a write to a different erase block is requested.
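To make that read/modify/erase/rewrite cycle concrete, here is a minimal sketch, assuming a flash object that exposes read_block, erase_block and write_block, 512-byte emulated sectors, and 128 KiB NOR-style erase blocks. It is not the Linux mtdblock driver, just an illustration of the behaviour described above, including the gathering of writes to a single erase block.

```python
SECTOR, BLOCK = 512, 128 * 1024  # emulated sector size and NOR-style erase block size

class NaiveFlashBlockDev:
    """1:1 block-device emulation: gather writes per erase block, then erase and rewrite."""

    def __init__(self, flash):
        self.flash = flash        # assumed to provide read_block/erase_block/write_block
        self.cached_block = None  # index of the erase block currently held in RAM
        self.buf = None

    def write_sector(self, sector_no, data):
        """Write one 512-byte sector into the RAM copy of its erase block."""
        block_no, offset = divmod(sector_no * SECTOR, BLOCK)
        if block_no != self.cached_block:
            self.flush()                                   # switching blocks: write the old one back
            self.cached_block = block_no
            self.buf = bytearray(self.flash.read_block(block_no))
        self.buf[offset:offset + SECTOR] = data            # modify only the RAM copy

    def flush(self):
        if self.cached_block is not None:
            self.flash.erase_block(self.cached_block)      # unsafe if power is lost here
            self.flash.write_block(self.cached_block, bytes(self.buf))
            self.cached_block = None
```

As the text notes, losing power between the erase and the rewrite destroys the whole block, which is why this scheme is acceptable only for development or read-only use.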
To emulate a block device in a fashion suitable for use with a writable file system, a more sophisticated approach is required.
To provide wear levelling and reliable operation, sectors of the emulated block device are stored in varying locations on the physical medium, and a "Translation Layer" is used to keep track of the current location of each sector in the emulated block device. This translation layer is effectively a form of journalling file system.
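A toy version of that idea, assuming page-sized sectors and ignoring garbage collection, wear-levelling bookkeeping and crash recovery, might look like the following. It is a sketch of the out-of-place-update principle only, not of FTL or NFTL themselves.

```python
class ToyTranslationLayer:
    """Each logical sector write goes to the next free flash page; a map tracks the live copy."""

    def __init__(self, num_pages):
        self.pages = [None] * num_pages  # simulated flash pages, all initially erased
        self.next_free = 0
        self.sector_map = {}             # logical sector number -> physical page index

    def write_sector(self, sector_no, data):
        page = self.next_free            # always append; the old copy becomes garbage
        self.pages[page] = data
        self.sector_map[sector_no] = page
        self.next_free += 1              # a real layer reclaims garbage and levels wear

    def read_sector(self, sector_no):
        return self.pages[self.sector_map[sector_no]]

tl = ToyTranslationLayer(16)
tl.write_sector(0, b"old")
tl.write_sector(0, b"new")               # the rewrite lands on a different physical page
print(tl.read_sector(0))                 # b'new'
```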
The most common such translation layer is a component of the PCMCIA specification, the "Flash Translation Layer" [FTL]. More recently, a variant designed for use with NAND flash chips has been in widespread use in the popular DiskOnChip devices produced by M-Systems.
Unfortunately, both FTL and the newer NFTL are encumbered by patents -- not only in the United States but also, unusually, in much of Europe and Australia. M-Systems have granted a licence for FTL to be used on all PCMCIA devices, and allow NFTL to be used only on DiskOnChip devices.
Linux supports both of these translation layers, but their use is deprecated and intended for backwards compatibility only. Not only are there patent issues, but the practice of using a form of journalling file system to emulate a block device, on which a "standard" journalling file system is then used, is unnecessarily inefficient.
A far more efficient use of flash technology would be permitted by the use of a file system designed specifically for use on such devices, with no extra layers of translation in between. It is precisely such a filesystem which Axis Communications AB released in late 1999 under the GNU General Public License.
David Woodhouse 2001-10-10
|
Father of Algebra
The Father of Algebra
In the source, Shawn Overbay writes a biography of The Father of Algebra, Al-Khwarizmi. Overbay shows and explains the equations that Al-Khwarizmi invented and how they were used. In the source, the author states “Al-Khwarizmi wrote numerous books that played important roles in arithmetic and algebra” (Overbay). Not only was The Father of Algebra a mathematician, he was also an inventor, an astronomer, and a scholar. The visual source is a page from Al-Khwarizmi’s Kitab Al-Jabr Wal-Muqabala, one of the oldest Arabic works on algebra. Comparing the visual source and the written source helps historians understand how our modern-day mathematics was born and the role it played in the 9th century. These sources enhance the understanding of the algebraic equations and arithmetic that were used in the 9th century and how they are still used in the modern era.
We can learn a lot about The Father of Algebra, Al-Khwarizmi, from these sources. Shawn Overbay goes into great detail on the mathematician’s work. In the Latin translation of Al-Khwarizmi’s algebra, Overbay talks about the simple equation types that the mathematician catalogued: squares equal to roots (x2 = 5x), squares equal to numbers (x2 = 4), roots equal to numbers (4x = 20), squares and roots equal to numbers (x2+3x = 25), squares and numbers equal to roots (x2+21 = 10x), and roots and numbers equal to squares (3x+4 = x2). One of the more complex equations Al-Khwarizmi worked with was the quadratic equation. This equation is used to solve for the unknown, which in this equation would be x (ax2+bx+c=0). When it is in that specific form, Al-Khwarizmi is asking: what is x when the function is equal to zero? Al-Khwarizmi’s idea was that if you can find two numbers which add together to give b and multiply together to give c, then x2+bx+c factors into (x+u)(x+v), so the equation becomes (x+u) = 0 or (x+v) = 0. At this point it’s just basic arithmetic to solve for x and you should get two...
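To make the factoring idea concrete, here is a small sketch for a monic quadratic x² + bx + c = 0: finding the two numbers u and v with u + v = b and u·v = c amounts to applying the quadratic formula, and the roots are then x = -u and x = -v. The function name and the worked example are my own additions, not part of the essay.

```python
import math

def solve_monic_quadratic(b, c):
    """Roots of x^2 + bx + c = 0 via the two numbers u, v with u + v = b and u * v = c."""
    disc = b * b - 4 * c
    if disc < 0:
        return None                          # no real roots
    root = math.sqrt(disc)
    u, v = (b + root) / 2, (b - root) / 2    # u + v = b and u * v = c
    return -u, -v                            # factoring (x + u)(x + v) = 0 gives x = -u, -v

# The essay's "squares and roots equal to numbers" example, x^2 + 3x = 25:
print(solve_monic_quadratic(3, -25))         # approximately (-6.72, 3.72)
```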
|
Florence Nightingale
On her death in 1910, Florence Nightingale left a vast collection of reports, letters, notes and other written material. There are numerous publications that make use of this material, often highlighting Florence’s attitude to a particular issue. In this paper we gather a set of quotations and construct a dialogue with Florence Nightingale on the subject of statistics. Our dialogue draws attention to strong points of connection between Florence Nightingale’s use of statistics and modern evidence-based approaches to medicine and public health. We offer our dialogue as a memorable way to draw the attention of students to the key role of data-based evidence in medicine and in the conduct of public affairs.
1. Introduction
1.1 Who Was Florence Nightingale?
Florence Nightingale (1820 - 1910), hereafter referred to as FN, made remarkable use of her ninety years of life. She was the second of two daughters, born in England to wealthy and well-connected parents. There were varied religious influences. Her parents both came from a Unitarian religious tradition that emphasized “deeds, not creeds”. The family associated with the Church of England (Baly 1997b) when property that FN's father had inherited brought with it parochial duties. A further religious influence was her friendship with the Irish Sister Mary Clare Moore, the founding superior of the Roman Catholic Sisters of Mercy in Bermondsey, London. Her father supervised and took the major responsibility for his daughters’ education, which included classical and modern languages, history, and philosophy. When she was 20 he arranged, at FN’s insistence, tutoring in mathematics. These and other influences inculcated a strong sense of public duty, independence of mind, a fierce intellectual honesty, a radical and unconventional religious mysticism from which she found succour in her varied endeavours, and an unforgiving attitude both toward her own faults and toward those of others.
At the age of 32, frustrated by her life as a gentlewoman, she found herself a position as Superintendent of a hospital for sick governesses. Additionally she cooperated with Sidney Herbert, a family friend who was by now a Cabinet minister, in several surveys of hospitals, examining defects in the working conditions of nurses. On the basis of this and related experience she was chosen, in 1854, to head up a party of nurses who would work in the hospital in Scutari, nursing wounded soldiers from the newly declared Crimean war. Her energy and enthusiasm for her task, the publicity which the Times gave to her work, the high regard in which she was held by the soldiers, and a national appeal for a Nightingale fund that would be used to help establish training for nurses, all contributed to make FN a heroine. There was a huge drop in mortality, from 43% of the patients three months after she arrived in Scutari to 2% fourteen months later, which biographers have often attributed to her work.
Upon her return to England at the end of July 1856 FN became involved in a series of investigations that sought to establish the reason for the huge death rate during the first winter of the war in the Crimea. Theories on the immediate cause abounded; was it inadequate food, overwork, lack of shelter, or bad hygiene? In preparation for a promised Royal Commission, she worked over the relevant data with Dr William Farr, who had the title “Superintendent of the Statistical Department in the Registrar-General’s Office”. Farr’s analysis persuaded her that the worst effects had been in Scutari, where overcrowding had added to the effect of poor sanitation. Sewers had been blocked, and the camp around had been fouled with corpses and excrement, matters that were fixed before the following winter. The major problem had been specific to Scutari. FN did not have this information while she was in the Crimea. The data do however seem to have been readily available; they were included in a report prepared by...
|
Why and How Have Liberals Supported the Fragmentation of Power
Why and how have liberals supported the fragmentation of political power? (15)
Liberals are concerned about power, most basically, because power constitutes a threat to liberty. Their concern about concentrations of power is rooted in their emphasis upon individualism and its implication that human beings are rationally self-interested creatures. Egoism determines that those who have the ability to influence the behaviour of others are inevitably inclined to use that ability for their own benefit and therefore at the expense of others. The greater the concentration of power, the greater the scope for rulers to pursue self-interest and, thus, the greater the corruption. Lord Acton stated, "Power tends to corrupt, and absolute power corrupts absolutely. Great men are almost always bad men." Behind Acton’s famous quote about power and corruption lies the liberal belief that, since human beings are individuals and therefore egoistical, they are bound to use power - the ability to influence the behaviour of others - to benefit themselves, and will use, or abuse, others in the furtherance of that goal. In essence, the greater the power, the greater the scope for using and abusing others in the pursuit of self-seeking ends. Such thinking has shaped liberalism in a number of ways. In particular it has encouraged liberals to endorse the principle of limited government, brought about through constitutionalism and democracy. Liberals thus support, for example, codified constitutions, bills of rights, the separation of powers, federalism or devolution, as well as regular, free and fair elections, party competition and universal suffrage. Constitutionalism delivers limited government either by legally ring-fencing government (e.g., codified constitutions and bills of rights) or by fragmenting government power, thereby creating a network of checks and balances (e.g., the separation of powers, bicameralism and federalism). Democracy delivers limited government because it bases...
|
Snowsuit
noun 1. a child’s one- or two-piece outer garment for cold weather, often consisting of heavily lined pants and jacket.
Read Also:
• Snow-thrower
noun 1. snow blower.
• Snow-tire
noun 1. an automobile tire with a deep tread or protruding studs to give increased traction on snow or ice.
• Snow-train
noun 1. a train that takes passengers to and from a winter resort area.
• Snowtubing
noun 1. the sport of moving across snow on a large inflated inner tube
|
Armor Test Standards: A Brief Comparison
Posted: March 4, 2014 in Armor Testing
Ever since the late 15th century, with the advent of powder-propelled projectile weapons (and indeed, pre-dating that time with crossbows), armor smiths have sought a way to ensure their product will resist (within reasonable limits) the injurious tendencies of fast-moving bits of metal.
Smiths making plate armor would often shoot the finished product (with crossbow bolts before lead muzzle-loader round balls arrived on the scene), and upon visual confirmation of a successful “stop,” would engrave their maker’s mark, showing the armor had passed. This methodology was (and is) referred to as “bench testing” or “proof testing.” An article of protective gear is subjected to one or more ballistic events, and whether it passes or fails gives the maker, and the end user, evidence of its effectiveness; armor that passes is “proofed.” Many modern firearms bear similar proof marks after being subjected to equivalent testing.
Bench testing is still used today, and while it has certain advantages, it also has some notable drawbacks. For instance, if large numbers of finished articles need to be produced, it becomes unwieldy to batch-test each lot (a necessary requirement to verify the efficacy of the finished product). It is also subject to the whims of either the maker or the end user. Bench testing is much better suited to custom or small-batch manufacturers of armor.
Bench testing remained the norm until the 1970s, when it became necessary to certify large numbers of concealable vests. The National Institute of Justice (NIJ) scrambled to come up with a way of testing and certifying large quantities of vests. The result was the “NIJ Rating” methodology, which should be familiar to everyone with any exposure to armor. The rating levels run from I to IV and correspond directly to the threats stopped: I-IIIA are for soft armor, while III and IV relate to hard (rifle) armor. The tests, in a nutshell, subject a batch of test armor articles to successive rounds of ballistic testing, ranging from 2 up to 240 rounds fired, and newer iterations of the tests have become more stringent. However, there are some issues with NIJ testing, which will be discussed in a later post.
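One way to see what batch certification adds over one-off proofing is to model the bookkeeping involved. The Python sketch below is purely illustrative: the level descriptions are a rough summary, and the pass rule (no perforations across the sampled panels) is an assumption made for the example, not the actual NIJ acceptance criterion.

```python
# Hypothetical sketch of batch acceptance testing for an armor lot.
# The level/threat mapping is a rough summary for illustration; the pass
# rule (zero perforations across all sampled panels) is an assumption,
# not the real NIJ criterion.
import random

NIJ_LEVELS = {
    "IIA":  "soft armor - lower-velocity handgun threats",
    "II":   "soft armor - common handgun threats",
    "IIIA": "soft armor - high-velocity handgun threats",
    "III":  "hard armor - rifle threats",
    "IV":   "hard armor - armor-piercing rifle threats",
}

def batch_test(panels, shots_per_panel, perforation_prob=0.001):
    """Simulate shooting a sampled lot; fail the whole lot on any perforation."""
    for panel in range(panels):
        for _ in range(shots_per_panel):
            if random.random() < perforation_prob:
                return False, panel
    return True, None

passed, failed_panel = batch_test(panels=10, shots_per_panel=6)
print("Lot passed" if passed else f"Lot failed on panel {failed_panel}")
```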
In just the past few years, two new test protocols have appeared on the scene: the FBI and DEA test protocols. Publicly released in 2006, the FBI protocol was a vast improvement over the NIJ protocol, subjecting the armor to much more realistic and useful tests. In addition to simply shooting the armor, the FBI/DEA protocols subject it to extreme heat, cold, and immersion, as well as conditioning the armor and exposing it to flash/flame. While there are still some issues (one in particular that is shared by the NIJ tests), they are far superior to the earlier protocols.
Moving forward, there is still room for improvement regarding armor test standards. This has been a brief overview of the most typical test protocols for modern body armor. In future posts, more detailed analysis of each protocol will be given. Thanks for reading.
|
In the Forest
A mountain lion watched a young doe from a rocky outcrop near a forest glade. It was cool in the glade, which was not far from an open meadow where several other deer were. There was a newly fallen tree between the doe and the lion. The lion crept closer, looking down from above.
The doe sensed that this was not a safe place, but she was attracted by the browse which had been brought within reach. She was eating peacefully.
The lion sprang. Though it had to clear the log, the kill was quick.
A raven entered the forest and was very much interested. It watched from a lower tree limb, cawing for reinforcements. None came.
Its hunger sated, the lion pulled some brush over the carcass and withdrew to rest near the foot of the rocky outcrop. From this secluded position, it watched the raven inspect the kill.
Copyright 2013 George Lowell Tollefson
|
Digging for coal on public land? Not on our watch
This week, Trump signed executive orders that, among other things, rescinded a moratorium on new coal permits on public land. Department of Interior Secretary Ryan Zinke quickly followed with an order rolling back the moratorium.
The Obama administration had placed the moratorium on permits because the fees collected hadn’t been adjusted in decades, and American taxpayers were not getting good value for their money. Zinke ostensibly gives a nod to this concern by chartering a committee to look into the fees, but sees no reason not to give out permits in the meantime.
A lawsuit was immediately filed by a coalition of environmental and citizen groups and the Northern Cheyenne Tribe. The groups are Earth Justice, Sierra Club, Citizens for Clean Energy, Montana Environmental Information Center, Center for Biological Diversity, Defenders of Wildlife, and Wildlife Guardians.
According to the Northern Cheyenne Tribe’s President, L. Jace Killsback:
The complaint states:
This case challenges a decision by the Secretary of the Interior (“Secretary”) to repeal a year-old moratorium on federal coal leasing by the Bureau of Land Management (“BLM”) and abandon programmatic environmental review of the federal coal leasing program. The prior presidential administration found this moratorium essential to ensure that such leasing is conducted, if at all, in a manner consistent with BLM’s environmental obligations and mandate to secure a fair economic return to U.S. taxpayers from publicly owned coal. In repealing the moratorium, Defendants Secretary of the Interior, Department of the Interior, and BLM (collectively, “Defendants”) opened the door to new coal leasing and its attendant consequences without first performing an environmental review evaluating the program’s significant environmental, health, and economic impacts—including impacts from climate disruption caused by the burning of fossil fuels such as coal, and socioeconomic and environmental impacts to local communities. In doing so, Defendants violated the National Environmental Policy Act (“NEPA”), 42 U.S.C. §§ 4321-4370h.
The organizations do have a strong legal challenge, as Zinke’s action does violate long-established NEPA procedures.
What is NEPA? From NEPA.gov:
President Nixon signed NEPA into law on January 1, 1970. NEPA set forth a bold new vision for America. Acknowledging the decades of environmental neglect that had significantly degraded the nation’s landscape and damaged the human environment, the law was established to foster and promote the general welfare, to create and maintain conditions under which man and nature can exist in productive harmony, and fulfill the social, economic, and other requirements of present and future generations of Americans. NEPA was the first major environmental law in the United States and is often called the “Magna Carta” of Federal environmental laws. NEPA requires Federal Agencies to assess the environmental effects of their proposed actions prior to making decisions. To implement NEPA’s policies, Congress prescribed a procedure, commonly referred to as “the NEPA process” or “the environmental impact assessment process.”
The ultimate goal of the NEPA process is to foster excellent action that protects, restores, and enhances our environment. This is achieved through the utilization of environmental assessments (EAs) and environmental impact statements (EISs), which provide public officials with relevant information and allow a “hard look” at the potential environmental consequences of each proposed project.
NEPA will figure prominently in most environmental lawsuits against Trump’s administration.
Earth Justice, Center for Biological Diversity, and the Northern Cheyenne Tribe are providing the legal expertise and resources for this lawsuit.
|
Civet Cats and the SARS Virus Found in China
When a "Cat" is Not a Cat
Civet cat
Peter Chadwick / Getty Images
In 2003, researchers in the southern part of China detected coronaviruses closely related to the SARS virus in three wild animal species, including the civet "cat," which are sold in markets there for food consumption. The coronaviruses were found in the masked palm civet, a tree-dwelling animal with a raccoon- or weasel-like face and a catlike body, and in the raccoon dog. The third species, a Chinese ferret badger, was found to produce antibodies to the SARS virus.
Civet Cats as Delicacies
These wild animals are considered great delicacies in China and are bred there for human consumption. Kept in squalid surroundings, stacked cage upon cage, they can be found in markets throughout southern China. While it has been thought that the virus could have been transmitted by food handlers, this discovery left several questions unanswered, according to the World Health Organization:
• Did these wild animals infect the food handlers or vice versa?
• Can the virus be spread animal-to-animal, by eating infected prey?
• How widespread is the SARS infection in food animals? (The tests involved only the wild animals from one market.)
• Can the virus be transmitted to humans from the eating of an infected animal?
The WHO on Civet Cats and SARS in China
According to an article on the WHO website, "Much more research is needed before any firm conclusions can be reached. At present, no evidence exists to suggest that these wild animal species play a significant role in the epidemiology of SARS outbreaks. However, it cannot be ruled out that these animals might have been a source of human infection."
Of particular interest is the fact that the first outbreaks in Guangdong province came shortly after the Chinese began importing civets for food from Vietnam. Hong Kong already outlaws the use of these animals as a food source. To mitigate the possibility that the virus actually is transmitted by palm civets and the other wild animals tested, the World Health Organization recommended proper sanitation safeguards in handling animals used for food, along with fully cooking the meat before consumption, as the virus cannot withstand the heat used in thorough cooking.
Since the first outbreaks of SARS in Guangdong province of China, scientists from WHO have suspected some sort of animal-human link for the coronavirus. The mutation of a virus strain from animals, allowing it to jump to humans, is a common cause of new illnesses in humans. The test food animals were taken from markets in Shenzhen, Guangdong, and included 25 animals representing eight species. All six civet cats were found to have the SARS virus. Animals that tested negative included the Chinese hare, Chinese muntjac (a type of deer), beaver, and domestic cats.
What Exactly Is the Civet Cat?
The civet is a mostly nocturnal animal, from the Viverridae family, found in Africa and the East Indies. It is approximately 17-28 inches in length, excluding its long tail, and weighs about 3 to 10 pounds. Although classified within the order Carnivora, the palm civet of southern Asia (so named because it can be found in palms) is a fruit-eating mammal. Although the Viverridae family is distantly related to the Felidae family, of which the common domestic cat is a member, the civet "cat" is not a cat. Indeed, it is more closely related to the mongoose than to any cat.
The civet is a cunning-looking little animal, with a catlike body, long legs, a long tail, and a masked face resembling a raccoon or weasel.
In some areas of the world, it has become an endangered species, hunted for its fur or as a food source. The civet's taste for fruit has been its downfall in at least one area of southeast Asia; as early as the 18th century, the durian fruit was also called "civet fruit," because it was used as bait for catching civets.
Civet Diet and Natural Habitat
The civet not only is fond of fruit but has had a love-hate relationship with growers of a particular coffee bean in Vietnam. Civets love this bean and search out the tastiest examples with their long, foxlike nose. The hardest beans survive the digestive process of the civet and are prized in caphe cut chon, or fox-dung coffee (Vietnamese call the civet a "fox").
Unfortunately for the civets, their habitat has been razed for new coffee orchards, and their decline has furthered because of the Vietnamese appetite for barbecued civet meat.
A restaurateur admitted that he was not troubled by the scarcity of caphe cut chon, saying that he'd rather "eat the fox." Actually, the new scarcity of fox-dung coffee beans has been a boon for entrepreneurs who market fake caphe cut chon as the real thing. However, that doesn't help the fate of the civet cats who are killed for food.
Lastly, the civet has been the source of a highly valued musk which is used as a stabilizing agent in perfumes. Although civets were at one time killed for their musk, they more recently have been "recycled" for this purpose. Also called "civet," the excretions are scraped from the civet's perineal glands, a painful process. Both male and female civets produce these strong-smelling excretions. At least one civet farmer in Ethiopia raises civets for their musk, although this practice is dying out as perfumers move toward using synthetic fixatives.
Maligned, abused, and beleaguered, the civet cat has an unknown future on many fronts.
But it is not a cat.
|
President Obama is going to restore Denali as the name of Alaska's Mount McKinley, siding with the state of Alaska in ending a 40-year battle over the name of the peak. (Reuters)
Some presidents have their names placed on schools, or airports, or highways. William McKinley's name graced a mountain—the tallest in North America, no less.
Not any more. On Monday, Barack Obama officially renamed Alaska's Mount McKinley, returning the giant peak to the original name of Denali, or "great one," given to it by the Athabascan people. McKinley, a former Ohio governor who was president at the end of the 19th century, was originally given the honor by a gold prospector who liked his support of the gold standard. The name stuck, though it has been a source of debate for years.
For starters, Ohio's political leaders aren't too happy about the name change. "There is a reason President McKinley's name has served atop the highest peak in North America for more than 100 years, and that is because it is a testament to his great legacy," Speaker John Boehner (R-Ohio) said in a statement issued Sunday night.
But how monumental, really, is McKinley's legacy? It seems a fitting moment to ask historians what kind of president the country's 25th chief executive really was.
Check the rankings of American presidents by historians or political scientists, after all, and he comes out above average, even underrated, but hardly top tier. A composite of recent presidential rankings by stats wizard Nate Silver found that McKinley came in at 19th among the 43 men who have held the office.
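Composite rankings of this kind are typically built by averaging a president's position across several independent surveys. The snippet below is a generic Python sketch of that idea; the survey names and rank values are invented for illustration and are not Silver's actual data.

```python
# Generic sketch: a composite rank as the average of a president's rank
# across several surveys. Survey names and values are hypothetical,
# not Silver's data.

surveys = {
    "Historians' survey A": 18,
    "Political scientists' survey B": 21,
    "Public survey C": 19,
}

composite_rank = sum(surveys.values()) / len(surveys)
print(f"Composite rank across {len(surveys)} surveys: {composite_rank:.1f}")
```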
"He tends to be stuck in the middle—not great but not terrible," said Brandon Rottinghaus, a political science professor at the University of Houston who did his own ranking last year. "The problem is this is where presidential legacies go to be forgotten," he said. The ones whose tenures were lukewarm are taught less often in schools and chronicled by fewer historians. "He's kind of victim to this sort of zone of forgotten presidents."
[Guidance for the negative thinker in all of us]
He also suffers from directly preceding Theodore Roosevelt in office, standing in the shadow of a man thought to have started the modern presidency and widely considered both on the right and left as one of America's greatest presidents.
But that's not a fully fair assessment of McKinley's time in office, say some historians. "I've never called him great, but I do think he was effective and important," said Lewis Gould, an emeritus professor of history at the University of Texas who wrote a biography of the 25th president. And when it comes to presidents, he said, that's "not a bad standard."
U.S. President William Mckinley, left, sits with his cabinet during a meeting in the White House. This photo is circa 1898. (AP Photo)
McKinley's presidency is often most remembered for two things: his 1896 election ushered in a period of Republican dominance, and he widely expanded the country's involvement in foreign affairs. The Spanish-American War was fought during his presidency, which led to the annexation of the Philippines, foreshadowing future complex conflicts around the globe.
"It was under McKinley that the United States took on a much more prominent role internationally," said Robert Saldin, a professor at the University of Montana who has studied McKinley. "It has amazing parallels to America’s involvement in world affairs today—for good and for bad—and the first time where America really stepped onto the world stage in a major way."
Yet how those milestones are viewed by history has shifted significantly over the years. While he was remembered favorably in the years right after his death—McKinley is one of the four presidents to be assassinated in office—his reputation later fell, aided in particular by those who considered him an imperialist. Up until the 1960s, Gould said, McKinley was seen as a weak president, manipulated by those around him, too easily pushed into foreign involvement. "There was one famous little quip that he had the backbone of a chocolate eclair," Gould said.
But then a historian named L. Wayne Morgan wrote a book in the 1960s that asserted McKinley was a better leader than many realized, and Gould followed with a book in the early 1980s that argued McKinley was the first modern president, not Roosevelt. As the University of Virginia's Miller Center on the presidency writes on its Web site: "He is now viewed as a President who tried mightily to avoid war ... who acted decisively when all the diplomatic cards had been played, and who asserted great presidential authority over his cabinet and generals."
McKinley broke with many precedents, historians say. In the past, presidents didn't speak directly with the public on policy issues and didn't campaign on behalf of their fellow party members or themselves. McKinley did both. He was talking about leaving the continental United States to visit Hawaii and Puerto Rico before his death, Gould said, something no president had done in office before. And he held press briefings, leaked news to reporters, and used mailings and printed propaganda.
"In terms of presidential leadership and style, he looks a little bit more like a modern president," said Saldin. "He’s been misplaced as just a standard-issue, old-style president where really he was a key force" in what would become a more contemporary style of presidential leadership.
Gould, who in the late 1990s taught Karl Rove an independent study course about McKinley's election (one that helped inspire both Rove's political strategy and his soon-to-be-released book), says McKinley's leadership style is something of an enigma. He was known to be a good listener, polite to Democrats, but also very focused on what he wanted. His secretary of war, Gould recounted, once said McKinley was utterly "indifferent to credit but he always had his way."
So would he care that his name was being removed from Denali? Gould thinks it's unlikely. Speaking to his secretary in late 1899, McKinley said “that’s all a man can hope for during his lifetime — to set an example — and when he’s dead, to be an inspiration for history.”
As Gould says, "he wasn’t interested in having a lot of memorials built up for him."
Read also:
Federal agencies are failing when it comes to managing their employees
Denali or McKinley?
|
Analysis of Richard Cory by Edwin Arlington Robinson Essay
Why does everyone want to be like someone else? It is human nature to want to be admired and honored. This is not right, though. Each and every person should be happy with who they are; just imagine if everyone were perfect and the same. The world would be quite boring. Edwin Robinson clearly shows us in his poem "Richard Cory" that the life of someone else may not be all it is cracked up...
Related essays:
• Analysis of Richard Cory, by Edwin Arlington Robinson - Poetry is central to the English language as both a communication tool and as a cultural heritage that dates back to antiquity. Poetry is a diverse and complex art, and it can take a lifetime to decipher a poet's intent and motivation in a poetic work. This paper explores the content and the stylistically imbued meaning in Edwin Arlington Robinson's 1897 poem "Richard Cory", a sixteen-line poem that narrates the rich, elite and noble, but socially unfulfilling, life of the man whose name forms the title of the poem.
• Wealth Envy in Richard Cory, by Edwin Arlington Robinson - All too often, those who have little money envy people with more. This is depicted in "Richard Cory," written by Edwin Arlington Robinson, in which the narrator describes Richard as if he were royalty: rich, worldly, well spoken, and educated (677). He wished he could be Richard and live with all the pleasures afforded the wealthy. Is it possible Richard had the reverse in mind when he ended his life? Money appears to be a key that unlocks happiness for people on the lower end of the financial spectrum.
• Richard Cory, by Edwin Arlington Robinson - A. Title: The title of this poem suggests that it is about a man, possibly a man people like and possibly a man they do not like. From the vagueness of the title, the man could be an outcast. B. Paraphrase: When Richard Cory goes downtown, people look at him. He was dressed nicely from head to toe, clean and very thin. He was always quietly well-ordered and human when he talked. But he fluttered when he said, "Good morning," and glittered when he walked. He was richer than a king and very well mannered and graceful.
• Richard Cory: Comparing Paul Simon and Edwin Robinson - Richard Cory poems are a traditional type of poetry found throughout different time periods. The poems range from the original to song variations, each contributing its own perspective on what Richard Cory symbolized, and each takes its own distinct form. Richard Cory poetry usually contains the distinct ending of Richard Cory taking his own life, but each poem adds its own variations to this repeated theme. Throughout the poems there are also many similar themes, which portray a consistent theme of the American Dream and how it transforms.
• Interpretation of Richard Cory, by Edwin Arlington Robinson - The poem "Richard Cory" by Edwin Arlington Robinson is the classic pity-the-star story. It has been rumored that some people worshipped by the public eye are just regular people with regular problems, but honestly, how big could their problems be? Richard Cory seems to be one of those heart-stopping, Rolex-wearing famous people who had a regular problem or two. In scanning the poem line by line, it is easier to uncover meaning. The first line of the poem suggests that Richard Cory wasn't a common person among the people.
|
How to help headaches with permanent prevention and cure methods
Headaches are varied and arise from varied causes; similarly, headache cures come in various forms, as shown in this article. Scientifically, more than 200 types of headaches fall into the primary and secondary classifications, and of these, about three-quarters are in the secondary category. The following information will assist anyone in finding headache help and headache prevention strategies.
Primary and secondary causes of headaches
A primary headache is one in which the headache is itself the direct problem; migraine is a good example. Secondary headaches arise from other conditions in the body whose symptoms include head pain. People suffering from a hemorrhagic stroke experience headaches, but it is not the headache that causes the stroke; rather, the bleeding in the brain caused by the stroke leads to a secondary headache.
Among primary headaches, adults experience migraine, tension, cluster and other distinct types. Tension headaches and migraines are the most common; tension headaches affect at least 30% of all adults, and treating them typically involves over-the-counter medications for the symptoms. Migraines are the second most common type. They can come with an aura, a warning sensation before the actual pain, or without one.
When you visit a physician with a headache problem, the first thing he or she will do is classify the condition as a primary or secondary headache. Patients looking for headache help need not dwell on the cause themselves, because worrying about it does not help the condition.
The first appropriate remedy is to respond to the symptoms: use over-the-counter painkillers or approved traditional methods of reducing pain. If the pain does not go away, consult a physician or nurse.
Headache cure and headache prevention methods
Headaches arise from an imbalance in the body's metabolism or in how the muscles are loaded. Primary headaches manifest when an external condition causes the imbalance; secondary headaches come from an imbalance caused by an existing health condition in the body.
Headache cure and headache prevention are achievable through the following: reduce alcohol consumption (red wine in particular), increase water intake, limit coffee consumption, and moderate chocolate intake. You should also adopt regular mealtimes and eat properly; your meals should have enough omega-3 fatty acids to reduce most headache symptoms.
Overall, cut out fried foods, or at least reduce your intake. Lemon juice and cinnamon water also work remarkably well as headache cure and headache prevention remedies.
How to help headaches with physical activity
For most people who need a non-medical intervention, headache help is only a physical activity away. The first and most effective way to help a headache is to get enough sleep; the second is to breathe fresh air consistently, since dirty air leads to complications in the body's metabolism.
Other commonly recommended activities for people seeking to help their headaches are mild exercise such as walking, hot showers for muscle relaxation, and massages. Aromatherapy also soothes the body and reduces stress and headaches, and meditation and other relaxation techniques have similar effects. If this sounds complicated, further options exist: depending on the availability of teachers and institutions, individuals can practice yoga or use acupuncture.
|
mini-Temple Article BINA
by Rabbi Yaacov Chaiton
Is there any particular significance behind the general synagogue set-up the way we have it today?
The synagogue is referred to in Rabbinic literature as a "Mikdash Me'at" - a "mini temple", a model of sorts of the Holy Temple which stood in Jerusalem. This concept is the greatest influence behind the synagogue structure which reflects the layout of the Temple. This is also the reason why a synagogue is built facing Jerusalem - highlighting the fact that our synagogues are made to model the Temple in Jerusalem.
Here are some examples:
The holiest section in the Temple was the "Kodesh Hakedoshim" - "The Holy of Holies". This was a small room in the front of the temple that housed the ark, the tablets and a Torah scroll written by Moses himself. This holy section was separated from the rest of the temple by a beautifully embroidered curtain called the "Paroches". Today this is represented by the "Aron Hakodesh" - the "Holy Ark" - at the very front of every synagogue. This is the focus of holiness in the synagogue because like in the Temple, it contains the holiest object - the Torah scrolls. You will notice that the ark is covered by a curtain. This is not just a decorative feature but is made to represent the embroidered curtain in front of the "Holy of Holies" in Jerusalem.
The Bima - the podium from which the Torah is read - represents the sacrificial altar ("Mizbeach") that stood in the Temple courtyard at a distance from the "Holy of Holies". It is for this reason that the bima is placed towards the middle of the synagogue. In fact, the reason that we encircle the bima with our lulavim on the festival of Sukkot is that in the times of the Temple they would encircle the altar with their lulavim.
You might also notice that in most synagogues there is a little lamp or electric light, usually found on top of the Aron Hakodesh, which remains lit at all times. This light is called the "Ner Tamid" - The Eternal Light - and reminds us of the Menorah in the Temple which always had at least one candle burning.
Many larger synagogues are built with a women's gallery - known as the "Ezrat Nashim". This too is modelled on the structure of the Temple, where the ladies' section was built on the upper level, surrounding the men below on three sides.
In the Temple, the priests were obligated to wash their hands and feet at a special water urn known as the "Kiyor", situated in the outer part of the Temple, before commencing their daily service. For this reason most synagogues are built with some sort of basin outside of the prayer area, so we too can wash our hands before commencing our service.
Because the Temple was the place where the Divine Presence rested, it mandated that all those visiting treat it with the utmost respect and sanctity. So too, our "mini Temples" are a holy place and we are expected to conduct ourselves in them with all due reverence and holiness.
|
Bernhard Walter,
a German chess endgame researcher and programmer who, according to Karsten Bauermeister, implemented compact endgame tablebases and endgame analysis tools for the Atari ST [1]. Along with Ingo Althöfer, in a paper published in the ICCA Journal, Vol. 17, No. 2 in 1994, Walter defined weak zugzwang: being on the move does not change the won-draw-lost value of the position, but, in a won endgame, it means a longer distance to mate for the side to move. They also presented statistics on weak zugzwang in the KQK, KRK, KBBK, and KBNK endgames [2].
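In tablebase terms, this definition can be checked position by position by comparing a position with one side to move against the same position with the other side to move. The fragment below is a minimal Python sketch of that check; wdl() and dtm() stand in for whatever tablebase probing functions are available and are hypothetical here, not the API of any real library.

```python
# Minimal sketch of the weak-zugzwang test described above.
# wdl(pos, to_move) and dtm(pos, to_move) are hypothetical tablebase probes,
# both expressed from White's point of view:
#   wdl returns +1 (White wins), 0 (draw) or -1 (White loses);
#   dtm returns White's distance to mate when the position is won for White.

def is_weak_zugzwang_for_white(pos, wdl, dtm):
    """True if the position is won for White with either side to move,
    but White mates more slowly when it is White's own turn (i.e. White
    would rather pass the move)."""
    if wdl(pos, "white") != wdl(pos, "black"):
        return False      # the game value flips: ordinary (strong) zugzwang
    if wdl(pos, "white") != +1:
        return False      # weak zugzwang is considered here only in won positions
    return dtm(pos, "white") > dtm(pos, "black")
```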
1. ^ Schachcomputer Geschichte von Karsten Bauermeister (German)
2. ^ Ingo Althöfer, Bernhard Walter (1994). Weak Zugzwang: Statistics on some Chess Endgames. ICCA Journal, Vol. 17, No. 2
|
Secondary storage
From Computer History Wiki
Secondary storage refers to all forms of data storage other than main memory; data in secondary storage generally has to be brought into main memory before it can be operated on.
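That "bring it into main memory first" rule shows up in almost any program that touches a disk. The fragment below is a small, generic Python illustration (the file name is just a placeholder): the bytes are copied from secondary storage into a buffer in main memory before the program can examine them.

```python
# Small illustration: data on secondary storage (a disk file) must be read
# into main memory (here, a bytes object) before it can be operated on.
# "records.dat" is a placeholder file name for this example.

with open("records.dat", "rb") as f:
    buffer = f.read(4096)        # copy the first 4 KiB into main memory

# Only now, with the data resident in memory, can the CPU work on it.
checksum = sum(buffer) % 256
print(f"Read {len(buffer)} bytes; simple checksum = {checksum}")
```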
Over time there have been many forms of secondary storage: magnetic tape is one of the oldest, and is still in use today. Disks are almost as old, and again, still in use (although the rise of solid state mass storage such as SD memory cards may make them obsolete).
There are a number of types which are now obsolete: punched cards, paper tape, etc.
|