Quantum Levitation Will Blow Your Mind

Scientists use superconductors and magnets to make magic

Let me preface this by dispelling any thought you might have that I know anything about the quantum physics that makes all of this possible: I don’t. But I do know something amazing when I see it. And this, my friends, kicks ass.

This demonstration video, courtesy of Tel-Aviv University and the Association of Science-Technology Centers (ASTC), has been making the viral rounds today. By that I mean I’ve seen dozens of social media shares of the video, and it has been sitting on the front page of Reddit all day. Once you see it, you’ll see why.

The demonstration is of something called quantum levitation, a phenomenon that results from the fact that superconductors and magnets tend not to like each other. They start with a crystal “wafer” and coat it with a thin layer of a ceramic material called yttrium barium copper oxide. The thing about that material is that it has no awesome properties on its own – but once you cool it below -185 degrees Celsius, it becomes a superconductor. So they drop it in liquid nitrogen and there you have it. Here’s where I’ll let the real scientists take over:

Superconductivity and magnetic field do not like each other. When possible, the superconductor will expel all the magnetic field from inside. This is the Meissner effect. In our case, since the superconductor is extremely thin, the magnetic field DOES penetrate. However, it does that in discrete quantities (this is quantum physics, after all!) called flux tubes. Inside each magnetic flux tube superconductivity is locally destroyed. The superconductor will try to keep the magnetic tubes pinned in weak areas (e.g. grain boundaries). Any spatial movement of the superconductor will cause the flux tubes to move. In order to prevent that, the superconductor remains “trapped” in midair.
The term they keep using is “locked in space.” And once you see it move and tilt, you’ll see why. This is amazing. Next step - hoverboards. For more info and cool vids check http://www.webpronew...ur-mind-2011-10 Edited by Saber, October 19, 2011 - 08:05 AM.
“The concept is fascinating to a lot of people. Some could not resist performing a complete mathematical analysis.” - Dave Trott

The stars appear to move because of the motion of the earth. If you are using a telescope at high power or are doing photography, you will need to have a telescope that “follows” them. Over the years, I have built several mounts for that purpose: This is the double-arm “barn door” drive which I invented many years ago. To get the complete story on this innovation, read my article from Sky and Telescope magazine, Feb. 1988, and the follow-up article in April 1989, reprinted below with the kind permission of Sky and Telescope Magazine. This device is designed to make it possible to photograph the stars with an ordinary camera, a couple of boards, some hinges and an inexpensive clock motor. The one in the photograph is my Type-3 barn door. A simple device like this can produce stunning photographs like this one of the Milky Way. Or this one of comet Hale-Bopp. The barn-door mount, whether single-arm or double-arm, is a very simple, inexpensive device. The original, single-arm barn door has been around for many years and was, I believe, first popularized by a fellow named Haig back in the 1970s. It is sometimes called the Haig or Scotch mount. My improvement in the double-arm version was designed to make the tracking very precise, which is important for astrophotography. But even imprecise tracking, like that provided by the single-arm barn door, can be very useful when doing visual observing. I have received many, many letters and emails about the Double Arm Barn Door since the article was first published in 1988. The concept is fascinating to a lot of people. Some could not resist performing a complete mathematical analysis. Others became interested in simply building one to see how well it worked. That was my situation back in 1988.
When I started to analyze the situation I wrote a program for my Commodore 64 computer that would allow me to run various examples until I found a successful design. My investigations showed no substantial benefit from the Type 1 and Type 2 configurations. But the Type 3 showed remarkable promise. I got so excited about the Type 3’s astounding results that I neglected the Type 4. I did an analysis of the Type 4, but I made a mistake somewhere along the line and somehow missed its even better performance. Others later investigated the Type 4 and showed that it is even better than a Type 3. I could not wait to try the Type 3. So I quickly built one and took it to a dark sky site. The prototype was not pretty and I can remember the strange looks I got from some of my fellow amateur astronomers when they saw it. (The one pictured is a beautified version.) Anyway, I made several discoveries about the need to “tune” this device. Polar alignment is fairly important and the tracking rate must be tuned at “close to” 1 RPM to optimize the performance. Small construction errors (tiny fractions of an inch) change the needed drive rate slightly. They do not substantially affect the reduction in the tangent error. I remember taking the prototype to the Riverside Telescope Makers Conference in 1987. Most people did not pay much attention to it because it looked so crude. I was very gratified when, at Riverside the next year after the article was published, I saw many “Trott Barn Door Trackers” built by fellow amateur astronomers from all over the country. I think one of them won an award. You can see a YouTube video about this mount here: Here are the two articles from Sky and Telescope Magazine in February 1988 and in April 1989.

The following article is from Sky and Telescope Magazine, February 1988, pp. 213-214, Copyright 1988, “Sky and Telescope” Magazine, published with permission. Thank you, “Sky and Telescope”! Gleanings for ATM’s Conducted by Roger W.
Sinnott

THE DOUBLE-ARM BARN-DOOR DRIVE

Camera drives of the barn-door or tangent-arm type are simple to make and easy to use, but they have intrinsic error. They try to convert the uniform motion of a nut, traveling along a straight threaded rod, into uniform angular motion of the camera board. Such a conversion is always inexact, and the error can’t be entirely eliminated by any simple mechanical means. However, my new double-arm version is much more accurate than the single-arm type. This is an example of two wrongs making a right. A major portion of the error in a standard barn-door arrangement is cancelled by the error in the coupled secondary arm! The double-arm principle may be applied to a camera drive of any size or construction with the same beneficial results. When I first thought of this idea, I sat down with a pencil and paper and tried to analyze the situation mathematically. An hour later I found myself staring at a page filled with insoluble equations. A numerical approach was obviously called for, so I wrote a program for my Commodore 64 computer that would allow me to run various examples until I found a successful design.

THE SINGLE-ARM PROBLEM

First consider a typical single-arm drive, sketched at top in the diagram at left. Two boards are joined by a hinge at one end and a threaded rod at the other. A motor attached to one board turns the rod at a constant rate, and because the rod is engaged in a nut on the other end the two boards slowly swing apart. For astrophotography, one board is bolted to a tripod or propped up at such an angle that the hinge axis points to the celestial pole. The camera is then fastened to the other board. As the motor turns, the camera tracks the stars — at least that’s the idea. Success depends on how accurately the hinge axis is oriented and on whether the angular motion of the board matches the diurnal drift of the stars across the sky. The Sun moves 15 degrees per hour, the stars 15.041 degrees per hour.
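The intrinsic error of the single-arm drive can be illustrated numerically. Below is a minimal sketch (not Trott's original Commodore 64 program), assuming the threaded rod is mounted as the expanding base of an isosceles triangle with equal sides r, the arrangement the article later attributes to Trott's drives, so the rod length is a = 2·r·sin(θ/2). The function names and default values are my own.

```python
import math

SIDEREAL_RATE = 15.041 / 60.0   # stars' drift in degrees per minute (from the article)

def arm_length(rpm=1.0, tpi=20):
    """Arm length r (inches) that matches the sidereal rate when the
    boards are closed (theta = 0), for a given motor speed and rod pitch."""
    feed = rpm / tpi                            # rod growth rate, inches per minute
    return feed / math.radians(SIDEREAL_RATE)   # small-angle rate match

def tracking_error(minutes, rpm=1.0, tpi=20):
    """Accumulated tracking error in arcseconds after `minutes` of drive
    time, starting with the boards fully closed (a = 0)."""
    r = arm_length(rpm, tpi)
    a = (rpm / tpi) * minutes                   # current rod length, inches
    theta = 2.0 * math.degrees(math.asin(a / (2.0 * r)))   # hinge angle
    return (theta - SIDEREAL_RATE * minutes) * 3600.0
```

With a 1-r.p.m. motor and a 1/4-20 rod this gives r of roughly 11.4 inches; the error stays under an arcsecond for the first several minutes but grows past an arcminute by the one-hour mark, with the drive "overshooting" the stars just as the article describes.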
There are several ways to attach the motor, threaded rod, and nut so they don’t bind as the boards swing apart. In my drives, I put the motor on a small platform hinged to the end of the camera board. The nut is recessed in the other board so it can tilt slightly to suit the “angle of attack” of the threaded rod. Thus the rod becomes the base of an isosceles triangle. As the motor turns, the rod’s effective length a grows at a constant rate and the triangle fattens with time. Unfortunately, the hinge angle theta does not change at a steady rate, as desired for astrophotography. The camera starts off tracking almost perfectly for several minutes, but then it gradually speeds up, “overshooting” the stars and causing their images to trail in a longer time exposure. Other methods of attaching the motor, threaded rod, and nut behave somewhat differently. For example, the tracking may start uniformly and then slow down. But there’s no escaping the fact that a single-arm drive, when powered by a straight threaded rod and constant-speed motor, fails to allow high-class astrophotography if the exposure time exceeds 10 or 15 minutes.

THE DOUBLE-ARM SOLUTION

In a double-arm drive the original arm still carries the motor and threaded rod and makes an angle theta with the fixed board, as before. But now we move the camera to an added second arm, driven by the first and inclined to the fixed board at a different angle, phi. We are looking for a smooth, linear change in theta with the growing rod length a. The diagram shows the four possible double-arm configurations and the equations that describe how well they track. In each case there are two fixed dimensions labeled b and c. I define the parameter beta as equaling the ratio b/c. My computer program began by asking me for a trial value of beta.
Then, for a series of values of a increasing in small steps, it calculated first theta, then phi, and finally the difference between a uniformly increasing angle and the calculated phi, or tracking error. While I was running this computer simulation of drive type 3, I spotted a long series of very small errors when I plugged in a value of 6 for the parameter beta. The output suggested that tracking would be excellent for nearly an hour! Indeed, the chart above shows that the tracking rate is considerably more accurate than that of a single-arm barn-door drive. Next I built the new double-arm mount and attached it to my stable “tetrapod” base (S&T: October, 1987, page 426). I then bolted on a camera and used a 6 x 30 guidescope to fine-tune the motor’s rate with a power inverter. When satisfied, I opened the shutter and made test exposures with only occasional adjustments of the inverter. The resulting pictures were of comparable quality to those obtained in piggyback photography on a standard equatorial mount. The specifications of my mount are shown below. The 12- and 2-inch measurements may be replaced by any other pair of values, as long as the larger is exactly 6 times the smaller. In construction, I suggest establishing the short distance between the hinges first. Measure the actual hinge spacing as a guide in cutting the L-shaped piece. Those who want to use a different motor or threaded rod than I did can calculate the required base length, r, from this formula: r = 274.8 x (r.p.m./t.p.i.), where r.p.m. is the motor speed in revolutions per minute and t.p.i. is the number of threads per inch. A small corner piece of Teflon is the only contact between the two arms. The camera-arm contact area is covered with linoleum to minimize friction. I like to begin an exposure with the camera arm’s hinge angle near 30°, since this is where the tracking accuracy is highest in the drive that I built.
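The base-length formula quoted above is simple enough to check in a couple of lines. A sketch (the constant 274.8 and the formula are taken directly from the article; the function name is my own):

```python
def base_length(rpm, tpi):
    """Base length r (inches) from the article's formula
    r = 274.8 * (r.p.m. / t.p.i.)."""
    return 274.8 * (rpm / tpi)
```

For the 1-r.p.m. motor and 1/4-20 threaded rod used in the article, base_length(1, 20) gives 13.74 inches, which matches the 13-3/4-inch dimension specified for the drive.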
Of the four possible configurations of double-arm drives, I found that only types 2 and 3 had solutions yielding increased tracking accuracy. For type 3, the ideal value of the parameter beta is 6, as already mentioned. But for type 2 the optimum is 0.5. I would appreciate readers’ comments, both theoretical and practical, on the double-arm idea.

The following article is from Sky and Telescope Magazine, April 1989, pp. 436-441, Copyright 1989, “Sky and Telescope” Magazine, published with permission.

Gleanings for ATM’s Conducted by Roger Sinnott

TWO ARMS ARE BETTER THAN ONE

The appeal of a barn-door drive is almost irresistible. It is a solid platform for time-exposure astrophotography with a small camera. You get two boards, join them with a hinge on one end, and slowly push them apart by turning a threaded rod at the other. If the lower board is clamped to a tripod, fencepost, or whatever, with the hinge axis pointing to the celestial pole, any camera on the swinging board can follow the stars across the sky. The idea probably got its start in the 1930s when Harvard astronomer Donald Menzel was mounting solar-eclipse expeditions. Suitable for any latitude, a barn door can carry the most outlandish array of eclipse experiments without needing delicate balance or counterweights. A total solar eclipse never lasts more than 7½ minutes, and in that time the tracking accuracy is quite excellent. But such a drive (sometimes called a tangent-arm drive) can’t track forever. Eventually the angular motion of the top board fails to match the linear motion of the screw pushing on it — an inevitable consequence of geometry. The amount of “tangent error” depends on the details of how the screw is attached, but there is no escaping it. Some astrophotographers, seeking error-free time exposures of 20 minutes or more, have explored ways to turn the screw at a progressively different rate to compensate for tangent error as the boards swing apart.
Others have introduced a special curved surface (cam) for the end of the screw to push against. Then in Sky & Telescope for February, 1988, page 213, Dave Trott (who now lives at 16825 E. Kenyon, Aurora, Colo. 80013) opened a whole new chapter on the subject by exploring a family of simple drives with not one but two hinged arms. One arm pushes on the other. Trott’s article brought a flurry of correspondence to Gleanings for ATM’s, the likes of which we haven’t seen since the early days of the Poncet platform in 1977. Some readers headed straight for basement workshops and soon sent photographs taken with their own double-arm drives. Others, whose mathematical curiosity had been piqued, submitted reams of handwritten calculations and printouts. Here we present some of these interesting results from around the world.

North Carolina. “It may never reach the accuracy of a large worm-gear drive,” writes veteran astrophotographer Johnny Horne (P. O. Box 297, Stedman, N. C. 18391). “But I’m fascinated to see a star sitting on the crosshairs at 80x for 3/4 hour and realize that it’s being kept there by a 1/4-20 screw and some hinges!” Horne’s double-arm tracker, pictured opposite, is made of South American hardwood. There are cavities underneath, which house four D cells and a stepper-motor circuit that he assembled from John Holbrook’s instructions in this department for July, 1986, page 80. “The rocker switches on the end make things happen. The motor is set to run at 1 r.p.m., but by bringing another capacitor on line it will run at about 25 r.p.m., which is useful for rewinding after an exposure. Another switch reverses the direction. The dimensions and layout of the boards and hinges are those given by Trott. “From the beginning this drive has tracked very well in right ascension. But I noticed a large periodic wobble in declination. Apparently my motor’s shaft was not concentric with the threaded rod.
Greg Gittings of the Raleigh Astronomy Club made a better coupler for me that cured this problem. Greg also machined the small trunnion assembly where the rod meets the lower board, allowing a smooth rocking action as the boards move apart. “Once the polar alignment is on the money, this unit tracks amazingly well. I have been monitoring stars at 80x and 160x with a Celestron C90 scope attached to the pan head. I can see the stepping action of the motor visually, but it doesn’t show on photographs made with a 200-mm lens. Lots of folks would like a Trott drive platform because it is accurate enough for very long, unattended exposures with a long lens and slow, fine-grain film.”

Colorado. Dave O. Cox of 3035 Deframe Rd., Golden, Colo. 80401, was among the first readers to send a thorough mathematical analysis of the four double-arm drive types identified in the Trott article (see diagram on page 437). “The double-arm drive can indeed greatly reduce the error of a single-arm drive,” Cox writes. “I agree with Dave Trott that the arrangement he calls Type 3 offers the best tracking of the various geometries. However, the ratio of the fixed-arm length, b, to the hinge spacing, c, should equal 3 + 2√3, or 6.464…, rather than the 6.0 he found in his computer trials. “This optimum ratio will track to better than 1 arc second for an hour! By comparison, the 6.0 ratio maintains the same accuracy for only 26 minutes. Also, I find that it is theoretically better to start with the boards fully closed, rather than opened to 30 degrees as Trott recommends. An optimum tracking accuracy does exist near 30 degrees, but it is a ‘local’ optimum and is not as good as starting at 0 degrees, even when using the 6.0 ratio. “The Type 2 drive also tracks better than a single-arm unit. But here the fixed arm should be 0.464 times as long as the hinge spacing, instead of the 0.5 found by Trott. The Type 2 drive is not nearly as good as Type 3, though.
At best it maintains 1-arc-second accuracy for only 24 minutes.”

France. “My curiosity was excited when I read about the ‘magic’ ratios, 6.0 and 0.5, in the Trott article,” reports Alain Hairie from 14 Rue du Blanc, 14000 Caen, France. “Why these two numbers? Is it possible to find other mathematical solutions?” Hairie begins by expressing the camera board angle phi as a function of the screw length, a (see the diagram on page 437 again). Rewriting this formula as an infinite series and discarding terms higher than the cubic, Hairie then shows that the Type 2 and 3 drives have one solution each — yielding the same ideal construction ratios given by Cox above. But Hairie continues: “What about Type 1 and Type 4 drives? According to Trott there is no increase in accuracy with these configurations, but as I continued my calculations I discovered yet another magic ratio, 2.0, that applies to his Type 4 drive. “And now comes the surprise: this solution is much better than the others! It can be used for very long exposures, up to two hours and more. The accompanying sketch shows a way to put together a Type 4 drive.” Hairie concludes, “The double-arm principle is most interesting and needs to be analyzed and tested further.”

South Africa. D. P. Smits of the University of Cape Town has also explored the accuracies of various configurations. His paper, “A Mathematical Analysis of the Double-Arm Barn-Door Drive,” appears on pages 155-160 of the December, 1988, Monthly Notices of the Astronomical Society of South Africa. Smits notes that all of Dave Trott’s drives, including the single-arm version, have the drive screw mounted as the expanding base of an isosceles triangle. He derives the same three ideal construction ratios found by Hairie (0.464 for Type 2, 6.464 for Type 3, and 2.0 for Type 4). But Smits comments that still other solutions, some equally good, exist when the drive screw pushes straight up from the fixed board and forms a right triangle instead.
His address is Dept. of Astronomy, University of Cape Town, Rondebosch 7700, South Africa.

Texas. “I thought your readers might be interested in a photograph I took using a double-arm drive,” writes Paul A. Peterson of 1209 Oak Hollow Dr., Friendswood, Tex. 77546 [see page 440]. “This type of drive is a lot easier to make than the Poncet platform that I built last year. “I put it together in a few hours, using scrap wood and a surplus motor. Considering the minimal effort involved, I am very impressed with the quality of photographs that the Trott double-arm drive makes possible.”

WHAT DIMENSIONS TO USE?

In the February, 1988, article, Dave Trott recommended adopting a 1-r.p.m. motor and 1/4-20 threaded rod. If so, he said, the parts of a Type 3 drive should be spaced so that r = 13-3/4 inches, b = 12 inches, and c = 2 inches. How do these values change in light of the further analyses by Cox, Hairie, and Smits? With the hinge spacing fixed at 2 inches, the change of beta from 6 to 6.464 means that the sliding-contact point should lie 12.928 inches from the drive hinge, not 12. But now the camera arm will swing too slowly unless we also move the threaded rod inward; it should be 13.519 inches from the drive hinge. This will produce an ideal Type 3 mounting whose performance is summarized on page 440. Tracking is now virtually perfect throughout the first hour, at the end of which the camera board is “fast” by just 1 arc second. Then the error grows. A 1-1/2 hour photographic exposure will have star trails 7 arc seconds long. But this is less than 0.0003 inch on film exposed with a 200-mm telephoto lens — virtually imperceptible on any reasonable enlargement. Even a 2-hour exposure with the same lens would have trails only 0.001 inch long on the film. The table compares this performance with a single-arm drive as well as double-arm types 2 and 4. We see that Type 2 is only a little better than a single-arm drive during the first hour. Then it fails miserably.
It also needs a rather small drive radius, and rugged construction would be necessary to carry any but the lightest camera. However, Type 4 drives are quite another story. Their performance is truly astonishing. While the table rounds off errors to whole arc seconds, a careful calculation using Alain Hairie’s beta = 2 shows that his drive will track to 0.005 arc second for half an hour! At the end of an hour, it is still good to 0.159 arc second. The last column illustrates how a slight variation of Hairie’s ratio, to beta = 2.186, can push the “perfect” tracking all the way to two hours, as long as we are willing to relax our expectations in the middle of the exposure. Now the worst error occurs near the 90-minute mark, and it amounts to just 0.8 arc second! The mathematicians have had their say, and now the ball is in another court. Can anyone build such a phenomenally accurate barn-door mount? How precisely must the hinges be spaced from each other and aligned on the celestial pole? Fortunately, perfect construction is not necessary. Atmospheric refraction will shift the stars at least 15 arc seconds in a one-hour exposure, even when we photograph the sky overhead. A drive made to track perfectly in one part of the sky will fail to do so in another. But these are problems affecting all telescopes and astro-cameras, not just barn-door drives. Whether you build a Type 3 or Type 4 unit — they’re both excellent — be sure to include a means of adjustment. The last two columns of the table make the point: a half-inch change in spacing c is compensated by a similar change in r. Remember this trick when trying to correct a slight construction error. In the end, the best way to get the tracking accuracy “dead on” is not to move the hinges or tamper with the motor support, but to find the proper motor rate. A synchronous motor’s base rate can be altered slightly with most commercial or home-built drive correctors.
Rate adjustment is also simple with the Holbrook stepper-motor circuit. R. W. S.

Many thanks to Sky and Telescope Magazine for allowing me to copy these articles on my web page! - Dave Trott

Read more about the Double-Arm Drive and get plans in Stephen Tonkin’s wonderful book Amateur Telescope Making, ISBN 1-85233-000-7. Also see a brief discussion in the fascinating book Unusual Telescopes by Peter Manly, ISBN 0-521-38200-9. There are many web pages about this invention: Steven Tonkin’s Page, Steve Gagnon’s Page, Jeff DeTray’s Page, W. Peters’ Page, Evan Williams’ Page, Starnamer’s Blog, Alan Davenport’s Page, Wikipedia: Barn Door, Steve Irvine’s Page, Cloudy Nights, Peter Barvoet’s Flickr page, Nifty Video, Another Nifty Video. Here is a great set of instructions to build one: Pentax Forum Barndoor Instructions. A unique innovation developed by Rowland Cheshire enhances the accuracy of the double-arm drive as explained at his site. If you are serious about precision, check it out.
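The revised Type 3 dimensions in the follow-up article come straight from Cox's optimum ratio, and the arithmetic is easy to reproduce. A sketch (the ratio and hinge spacing are from the articles; the names are mine):

```python
# Cox's optimum b/c ratio for the Type 3 drive: 3 + 2*sqrt(3) = 6.464...
BETA_IDEAL = 3 + 2 * 3 ** 0.5

def contact_distance(c=2.0):
    """Distance b (inches) from the drive hinge to the sliding-contact
    point, for hinge spacing c, using b = beta * c."""
    return BETA_IDEAL * c
```

With the 2-inch hinge spacing from the original article this puts the contact point 12.928 inches from the drive hinge, matching the follow-up's revised figure. (The revised rod position, 13.519 inches, depends on the full drive geometry and is quoted from the article rather than derived here.)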
A new record set so soon after the previous record of 17.5C in March 2015 is a sign that warming in Antarctica is happening much faster than the global average.

Antarctica has logged its hottest temperature on record, with an Argentinian research station thermometer reading 18.3C, beating the previous record by 0.8C. The reading, taken at Esperanza on the northern tip of the continent’s peninsula, beats Antarctica’s previous record of 17.5C, set in March 2015. A tweet from Argentina’s meteorological agency on Friday revealed the record. The station’s data goes back to 1961. Antarctica’s peninsula – the area that points towards South America – is one of the fastest warming places on Earth, heating by almost 3C over the past 50 years, according to the World Meteorological Organization. Almost all the region’s glaciers are melting. The Esperanza reading breaks the record for the Antarctic continent. The record for the Antarctic region – that is, everywhere south of 60 degrees latitude – is 19.8C, taken on Signy Island in January 1982. Prof James Renwick, a climate scientist at Victoria University of Wellington, was a member of an ad-hoc World Meteorological Organization committee that has verified previous records in Antarctica. He told Guardian Australia it was likely the committee would be reconvened to check the new Esperanza record. He said: “Of course the record does need to be checked, but pending those checks, it’s a perfectly valid record and that [temperature] station is well maintained.” “The reading is impressive as it’s only five years since the previous record was set and this is almost one degree centigrade higher. It’s a sign of the warming that has been happening there that’s much faster than the global average. “To have a new record set that quickly is surprising but who knows how long that will last? Possibly not that long at all.” He said the temperature record at Esperanza was one of the longest-running on the whole continent.
Read the full Guardian article here:
Teacher’s Guide to the Flipped Classroom

At one time, education was a passive experience in which students sat in class listening to lectures or reading books. While lectures and textbooks are still part of the educational system, they have taken a backseat to flipped learning. In this educational method, internet technology is incorporated into the classroom, freeing up teachers to help students instead of only lecturing to them. Flipped teaching shows a lot of promise so far and is likely to become the predominant teaching model in the future.

How Does The Flipped Classroom Work?

- Technological pre-learning: In the flipped classroom, the class is not organized around the teacher’s lectures. Instead, students are required to learn the material before class, allowing them to clear up any misunderstandings while the teacher is available. This “pre-learning” is often accomplished online, with the teacher posting instructional videos for students to watch at home.
- Reading and writing: Students read textbooks or conduct online investigations outside of class. Teachers have them write reactions to the readings or prepare questions to ask during class time.
- Classroom assistance: Once students are in the classroom, they are able to obtain one-on-one help from teachers and teacher’s aides. The teacher may split the students up into groups to work on projects or hold discussions. Meanwhile, he or she is in the room offering help to students who have questions or who are struggling.
- Enrichment activities: The classroom experience may also be supplemented by labs, hands-on projects or field trips. Activities like these are often successful in capturing student interest, making them more likely to continue to study concepts on their own.

What Are The Advantages Of The Flipped Classroom?

The biggest advantage of the flipped classroom approach is the element of active participation.
When taught in the traditional lecture-and-note manner, students often grow bored and find themselves interested in everything but the content covered by the teacher. Proponents of the flipped classroom argue that it is better to let students learn the material at their own pace in a comfortable home environment with the help of readings, videos and online material. Then they can pursue interactive learning activities in the classroom while receiving personalized attention from the teacher. The other advantage of the flipped classroom is its incorporation of new technology. In today’s world, both children and adults need to be technologically literate in order to succeed in the workforce. When technology is used in everyday classroom activities, students are more likely to feel comfortable using it on their own. Additionally, research suggests the use of new technology may improve test scores. In one study conducted at Amelia Earhart Middle School in Riverside, California, middle school students’ scores on algebra tests increased greatly when they used iPads to learn material instead of following the traditional textbook and lecture approach. Other studies involving tablets and Smart Boards have also shown promising results. Researchers expect future pilot programs involving both Smart Boards and tablet computers to prove similarly successful.
This Easter weekend, I answer one of the more disparaging questions I’m asked by secularists. That is: “How can a true scientist believe in the gospel message of Christ?” The answer begins with a proper definition of science. Science is the study of nature through empirical evidence. A truly scientific theory, by definition, must be testable by repeatable observations or experiments. Yet there are many observations in nature that cannot be scientifically tested. Take the creation of the natural world. As explained by the big-bang theory, all the matter and energy of the universe was compressed into a cosmic egg that inexplicably exploded. But nobody knows where the cosmic egg came from, or how it arrived. Neither has a single important prediction of this theory been confirmed. Even worse, it contradicts multiple principles, including the first and second laws of thermodynamics and the law of conservation of mass. That means the big-bang theory is largely a faith-based idea. That somehow does not deter a great many scientists from accepting the theory as true. Obviously, their conclusions are based on corroborating observations that are directly testable, such as the expanding universe. Similarly, the gospel message is a faith-based belief characterized by precepts that challenge the laws of physics. Still, there are scientific principles corroborating the biblical text including the gospel message. For example the gospel message states: - God created man and man sinned; - As descendants of Adam, we all have his sin nature, i.e., we have all sinned; - The penalty of sin is death; - Christ was born of a virgin, i.e., without the sin nature of Adam; - He lived a sinless life, died and rose again; - By conquering death and the grave, He paid the price for all sin so that anyone who accepts and believes on Him might live. Now, without debating the message itself, consider two corroborating scientific principles. 
“In the beginning” God commanded Adam and Eve to be fruitful and multiply – populating the earth. According to the Bible and science, the power to create life resides in the “seed” or sperm of man. The sperm fertilizes the egg and creates a new life. Scientifically speaking, the ability to reproduce is characterized by replication of the genetic code, which is how our heredity is passed down from generation to generation. Therefore, since we are all descended from Adam, his sin nature would be passed to all men based on the principle of heredity. Now consider the birth of Christ. After it is revealed that Adam and Eve ate of the forbidden fruit, God said, “I will put enmity between you and the woman and between your seed and her seed.” Here the “seed of woman” can only be an allusion to a future descendant of Eve who would not have a human father. Biologically, a woman produces no seed or sperm, and Biblical usage almost always speaks of the seed of men. This promised Seed would have to be miraculously implanted in the womb. In this way, He would not inherit the sin nature, which would disqualify every descendant of Adam as the perfect sacrifice for sin. That means this prophecy not only anticipates the future virgin birth of Christ, it reflects an understanding of genetic technology that mere man did not possess until thousands of years later. One other important point – no blood passes from the mother to the child during development. Rather, the child’s circulatory system is formed and works independently of the mother. That means the blood of Mary that would have been marred by sin did not mix with the perfect blood of Christ shed on the cross. Not impressed?
Then I ask you, “Who among us today could write a ‘story’ on the creation and sin nature of man as well as a plan for salvation consistent with scientific theories we won’t discover for thousands of years?” Bottom line: Secular scientists and their followers regard themselves as unilateral guardians of logic and scientific thought no matter how far-fetched and unsupported their theories. Christians, on the other hand, are lampooned as dim-witted and bent on impeding science with irrational bias. But it was Kepler who said the study of science was “thinking God’s thoughts after Him.” And it was Newton who concluded “Atheism is so senseless.” Truth is, we all have our biases. But no one has to check their brains at the door of the gospel or the biblical text. The Bible makes reference directly or indirectly to countless scientific principles, including the second law of thermodynamics, the expanding universe, rare medical conditions and more. Its accuracy and insight are unparalleled for a document written years before these concepts were understood by man. That’s scientific integrity.
Earlier this year, a new Health Department report cited nitrate pollution as a “growing threat” to Minnesota’s drinking water, pinning the blame on fertilizers used for row crop production. Governor Mark Dayton characterized the water quality issue as a “widespread problem,” calling for anti-pollution legislation and warning that, “bad water threatens our health, our economy, and our future.” Dayton is right. But when it comes to problems stemming from the current industrial food system, we need to get beyond cleaning up the mess. At some point, we have to ask: if our food system causes nitrate pollution, climate change, obesity, diabetes, and biodiversity loss—while undermining the very soil quality it depends upon for its own long-term viability—isn’t it time to find a better way? This “better way” for our food system is the work of a branch of interdisciplinary science called agroecology, which approaches farming as an ecological and social challenge. Agroecologists work with producers to create and maintain farms that rebuild their own soil, capture their own nutrients, and host pollinators and beneficial insects—all of which contributes to providing the farmers with profitable, sustainable livelihoods. Further, by mimicking natural systems, agroecology greatly improves the environmental performance of agriculture, while producing sufficient yields and improving resilience for farmers and farm communities. If you are wondering why you haven’t heard of such an important branch of science, you’re not alone. Although agroecology offers a promising, proven solution to the problems with our food system, it remains woefully underfunded in comparison to other agricultural research, thus depriving farmers of critical information that could help them solve problems like nitrate pollution. How badly are we neglecting this promising solution to nitrate pollution and other problems caused by our current food system? 
At this year’s joint meeting of the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America—which is being held this week in Minneapolis—researchers (including co-author Carlisle) will discuss the results of a study newly published in Environmental Science and Policy, which quantifies the share of public agricultural research funding that supports sustainable approaches. The findings are sobering. Although this research analyzed one of the most sustainability-oriented sources of public agricultural research funding—USDA’s National Institute of Food and Agriculture—the study found that just 15% of projects funded in 2014 even considered agroecological practices. To put this in perspective, the total sum for analyzed projects containing any agroecological practices represented just 1.5% of USDA’s full Research, Education, and Extension budget. What does this mean for Minnesota? Earlier this year, the proposed implementation of 50-foot buffer strips along nearly every waterway in the state led to a “buffer battle” that Dave Orrick at the St. Paul Pioneer Press referred to as “Mark Dayton vs. farmers.” The fight demonstrated why agroecological approaches—that consider both farmer livelihoods and environmental benefits—are needed. For the most part, Minnesotans’ discussion of the proposed buffer regulation has played out as an apparent zero-sum game—pitting pollution control against farmers’ incomes. Yet just across the border, Iowa State agroecology researcher Matt Liebman has found that a holistic approach—diversifying crops and installing modest buffer strips—can lower water pollution from pesticides by a factor of 200, decrease pollution from nitrogen runoff, and also maintain or exceed previous profits and yields. 
Converting to the systems Liebman and his team have developed is no easy task, but here’s the question: if agroecology can drastically reduce the herbicides and nitrogen in our water while preserving farmers’ yields and profits—shouldn’t agroecology get more than 1.5% of research budgets? Indeed, we think funding agroecology research is a critical, win-win step toward fixing the food system. As Congress looks ahead to the next Farm Bill and candidates stump for 2016, this is an objective farmers, eaters, and policymakers can all “come to the table” to achieve.
Who Knows About Human Rights? Survey Evidence from Four Countries
Sur: International Journal on Human Rights, Vol. 20, 2014
37 Pages. Posted: 12 Mar 2014. Last revised: 3 Sep 2014. Date Written: March 7, 2014
This article presents early results from the Human Rights Perception Polls, representative surveys on human rights attitudes conducted in 2012 in Mexico, Colombia, Morocco and India. We investigate statistical associations between two measures of human rights familiarity – exposure to the term, “human rights,” and personal contact with human rights workers – and four measures of socio-economic status (SES): education, income, urban residence, and internet use. Controlling for sex and age, we find higher SES is generally associated with more human rights exposure and contact. Interpretation of these results’ practical ramifications, however, depends on readers’ underlying view of the human rights mission. Should human rights groups engage chiefly with society’s poorest and most vulnerable populations? If so, our results suggest room for improvement. If readers instead believe human rights groups should focus on elites, advocate high level reforms, or link disparate groups, however, our results offer less cause for concern.
Keywords: Survey data, human rights, public opinion, Morocco, Mexico, India, Colombia, elites, grassroots
Definition: the cause of bovine babesiosis in western and central Europe; vector tick is Ixodes ricinus; it has caused human babesiosis in splenectomized patients in Europe; also found in reindeer.
Disclaimer: This site is designed to offer information for general educational purposes only. The health information furnished on this site and the interactive responses are not intended to be professional advice and are not intended to replace personal consultation with a qualified physician, pharmacist, or other healthcare professional. You must always seek the advice of a professional for questions related to a disease, disease symptoms, and appropriate therapeutic treatments.
© Copyright 2017 Wolters Kluwer. All Rights Reserved. Review Date: Sep 19, 2016.
blood group antigen
Definition: generic term for any inherited antigen found on the surface of erythrocytes that determines a blood grouping reaction with specific antiserum; antigens of the ABO and Lewis blood groups may be found also in saliva and other body fluids; the genes controlling development of blood group antigens vary in frequency in different population and ethnic groups. See also Blood Groups Appendix.
Synonym(s): blood group substance
1. Explain the ideological division between the USSR and the USA that caused the Cold War.
2. Take one incident from the Cold War and explain how this increased tensions between the two Superpowers.
3. Explain the impact of the collapse of the USSR on the development from a bipolar to a multipolar world.
Editor's note: Jerrilynn Dodds is the Dean of Sarah Lawrence College in New York and author of New York Masjid: the Mosques of New York (2002). Her most recent book is Arts of Intimacy: Christians, Jews and Muslims in the Making of Castilian Culture, which she co-authored with Maria Menocal and Abigail Krasner Balbale.
(CNN) -- It's hard to think of a better place for a mosque today than lower Manhattan, near ground zero. To support the siting of a mosque there is not just deeply American -- a declaration of the freedoms we stand for -- it is the continuation of a long and established New York tradition of mosque-building. In fact, by any historical measure it is absurd to see Cordoba House, a community center that will include a mosque, as a kind of hostile and exotic cultural invasion of the lower east side. Mosques have been part of New York's rich architectural and religious mix for over a century, and today hundreds of thousands of Muslims -- many of whose New York roots go back generations -- attend the city's more than 100 mosques in the five boroughs. The Muslims who built these mosques are New Yorkers, blameless in the events of September 11, 2001, and linked to other New Yorkers through the deep shared sense of loss and pain evoked that day. Their mosques, already part of our urban identity, bear witness to the strength of our freedoms, as will the Cordoba House center. It is likely that Muslims have prayed in New York City for much of its history, and particular buildings have been dedicated to Muslim prayer for over a century. Muslim slaves from Africa who lived in New York no doubt had places to pray as early as the 18th century, but the first mosque building in New York was likely the one belonging to the American Mohammedan Society in 1907 on Powers Street in Brooklyn. The Islamic Mission of America constructed its own mosque in 1939, and in 1947 purchased the brownstone where the Masjid Daoud can still be found today.
The number of mosques in the city began to increase significantly in the 1960s after the ratification of the 1965 Immigration Act, which increased immigration from non-European countries with Muslim populations. Over time, they would range from modest basement prayer halls to elaborate architect-designed buildings. One small mosque in Brooklyn has a congregation of just a dozen neighbors, who take turns leading prayer. The first mosque of a new Muslim community in New York, for example, might simply be a suburban house, like the split-level in Richmond Hill, Queens that served as the Masjid Hazrat-i-abu Bakr in the 1990s. With time, the community might gather the funds to construct a more elaborate building, like Masjid Hazrat-i-abu Bakr's grander building today at the same location. Many mosques in New York City are built and financed by the community members themselves; some donate materials, labor or money. The Ali Pasha Mosque in Astoria, and the Albanian Cultural Center in Staten Island were completed in the 1990s with the help of the contracting and manual labor of their communities. A new mosque can result in the building up of a neighborhood. Fatih Camii was fashioned from an old building in Sunset Park, Brooklyn, and a representative of the New York Police's 66th Precinct commented to me in the 1990s that the mosque had revitalized the neighborhood: "Since the congregation renovated the building and began to function, the entire neighborhood has profited." This is surely the case with the Masjid Malcolm Shabazz, a renovation of the former Lenox Casino in Harlem by architect Sabbath Brown in 1965. There the addition of a dome marks the presence not only of a mosque, but a school and other community services that make it a beacon in the neighborhood. The mosque's community has been instrumental in constructing low income housing and supporting the economic revitalization of Harlem.
Mosques as community centers all around New York provide day care, help with small-business start-ups, rooms for events, classes in English and other languages, and gyms and recreational facilities for their neighborhoods. New York's newly designed mosques are real products of American pluralistic culture. The first mosque in New York designed from the ground up was probably Masjid Alfalah, for which the community collaborated with local Korean-American architect William Park in 1983. Such grand mosques as the Albanian Cultural Center in Staten Island, or the modernist Islamic Cultural Center on Manhattan's East Side (designed by the famed architectural firm Skidmore, Owings and Merrill) are monuments to the transformations wrought by Muslim communities: They are American mosques. Yesterday, New York City's landmarks commission voted unanimously to deny historic status to the Park Place site, clearing the way for construction of Cordoba House, also known as Park51. The name Cordoba House, though, is particularly fitting -- an evocation of the rich interactions of Christians, Muslims and Jews in Medieval Spain. Medieval Spain was not often a paradise of tolerance and peace. But where peoples lived together, the understanding spawned by that coexistence gave the lie to the notion that Muslims, Jews and Christians must by nature be opposed, and created a more cohesive, fecund, peaceful and plural society. The Muslims who pray in New York's mosques are Americans who, like Catholic or Jewish immigrants before them, seek to be part of the city, part of this country. The more than 100 mosques of New York are visual signs, not only of the presence of these Muslim Americans, but also of the religious freedom that distinguishes the American way of life. By their very existence they defeat the hostile, polarized vision of Islam and America that the authors of the WTC attacks hoped to engender.
If we wish to stand in defiance of the unspeakable death and destruction of 9/11, we could not do better than to welcome Cordoba House in the very neighborhood of lower Manhattan where those acts occurred, as part of the city's long tradition of mosque-building. The opinions expressed in this commentary are solely those of Jerrilynn Dodds.
LONDON, England (CNN) -- A car that can drive itself is the fantasy of any designated driver, and the dream of owning a vehicle that does all the driving while you sit back and relax is one step closer to reality: in-car artificial intelligence being developed by a team at Stanford University is ready to be used on city streets in the ultimate test of robot cars. To win the Defense Advanced Research Projects Agency (DARPA) Grand Challenge last year with a car called Stanley, Sebastian Thrun and his team at the Stanford Artificial Intelligence Laboratory developed a form of robotics that went beyond being purely reactive. Rather than simply processing data and reacting accordingly, the in-car A.I. could evaluate data in milliseconds and decide whether it was correct or not. Many of the early prototypes of robot-controlled cars were literally stopped in their tracks by faulty information -- mistaking tumbleweed for a rock, for example. "What we have in Stanley is a revolution in the field of artificial intelligence. We now have ways to make robots understand the environment and make decisions about it, even if that environment is really complex," Thrun told CNN. Thrun and his team mounted a series of sensors on the roof of their Volkswagen Touareg, including radar, laser range-finders, stereo cameras and GPS receivers. Crucially, the car was also equipped with machine-learning algorithms that mimicked the behavior of a human driver. The sensors sent data to a bank of computers housed in the trunk of the car ten times every second; the processed information was passed to the brake, throttle and steering wheel, each controlled by tiny motors. Thrun's car completed the 132-mile desert course in just under seven hours. Only four other cars of the 23 competing for the prize finished the course. The rugged terrain of the Mojave Desert proved tough enough, but the next challenge will push in-car A.I. even further.
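The cycle described above -- noisy sensor readings arriving many times a second, implausible ones rejected before the car reacts -- can be sketched in a few lines of Python. This is a purely illustrative toy with invented names and thresholds, not Stanley's actual software:

```python
# Toy sketch of one sense-decide-act cycle (illustrative only; all
# function names and thresholds here are invented, not Stanley's code).

def filter_readings(readings, max_jump=5.0):
    """Drop range readings that jump implausibly from the last accepted
    one -- the kind of sanity check that keeps a stray 'tumbleweed'
    return from stopping the car."""
    kept = [readings[0]]
    for r in readings[1:]:
        if abs(r - kept[-1]) <= max_jump:
            kept.append(r)
    return kept

def decide_throttle(obstacle_distance, safe_distance=20.0):
    """Full throttle when the nearest credible obstacle is far away;
    back off proportionally as it gets closer."""
    if obstacle_distance >= safe_distance:
        return 1.0
    return max(0.0, obstacle_distance / safe_distance)

# One cycle: laser ranges in meters, with one spurious 2.0 m spike.
ranges = [30.0, 29.5, 2.0, 29.0, 28.5]
credible = filter_readings(ranges)         # the 2.0 m spike is rejected
throttle = decide_throttle(min(credible))  # 28.5 m is comfortably clear
```

In the real car this loop ran ten times a second against far richer data, but the shape is the same: filter the sensor stream, then decide, then command the actuators.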
Next year's DARPA Urban Challenge will pit robot racers against each other in negotiating a 60-mile course through a simulated city environment. "After the success of last year's event, we believe the robotics community is ready to tackle vehicle operation inside city limits," said DARPA Director Dr. Tony Tether. The contestants will have to obey traffic regulations and cope with all the aspects of city driving, from merging with other vehicles and changing lanes to observing stop signs and parking. Thrun's team will be competing again, as will ten other teams from A.I. research institutes, including last year's Grand Challenge runners-up from Cornell University. All will be trying to perfect the driver-less car that can cope with an urban environment without remote control. "The urban environment is very complex, the car really has to understand what is around it -- pedestrians, buses, bicycles -- and understand how these things interact with the car and make decisions in a split second," said Thrun. Traffic in a city is hectic at the best of times, but Thrun is not daunted by the task of trying to bring order to this chaos. "There is actually a lot of order, even with urban traffic. There are road markings and traffic rules and there's the behavior of other road users that can be anticipated. We're currently coding all this information into an artificially intelligent robot. I can't say it's an easy ride, but I'm confident that it can be achieved." Ultimately Thrun and the other teams competing at next year's Urban Challenge hope to develop cars that exceed the capabilities of human-driven vehicles and dramatically increase their safety. Every year around 43,000 people are killed in road traffic accidents in the United States, 90 percent of which are caused by human error. Many of today's cars have elements of A.I., from radar-guided cruise control systems and GPS collision sensors that enable cars fitted with the same equipment to communicate with each other. 
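The rule-following the Urban Challenge demands -- stop, wait for the right of way, then proceed -- is often modeled as a small state machine. The sketch below is a deliberately simplified illustration, not any team's actual control code:

```python
# Minimal stop-sign state machine (illustrative only): each decision
# cycle takes the current state, speed, and whether the intersection is
# clear, and returns the next state plus a commanded speed.

def stop_sign_step(state, speed, intersection_clear):
    if state == "APPROACH":
        # Brake steadily toward the stop line.
        if speed <= 0.0:
            return ("STOPPED", 0.0)
        return ("APPROACH", max(0.0, speed - 5.0))
    if state == "STOPPED":
        # Observe right of way: move only once cross traffic has cleared.
        if intersection_clear:
            return ("PROCEED", 10.0)
        return ("STOPPED", 0.0)
    return ("PROCEED", speed)  # already through; hold speed
```

A real competitor would fold dozens of such rules (merging, lane changes, parking) into one planner, but the split-second, rule-bound decision-making Thrun describes has this basic shape.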
Toyota has developed technology that uses cameras to detect the curb when parking and turns the wheel automatically to reverse into the right spot. While technology that could one day make the car autonomous -- and hopefully safer -- is being developed, there will still need to be a great leap of faith among road users to trust cars to drive themselves. "There is clearly an issue of making people trust their lives to the hands of a robot. We are already doing this in the field of aviation with auto-pilot, but we have to get the same confidence for cars," said Thrun. As well as consumer confidence, a further hurdle to the adoption of robot cars is the legal aspect. The team behind Stanley ultimately envisage a car with a button on the dashboard that would switch the vehicle from manual to computer-controlled driving, so drivers can choose between the experience of driving themselves or having a computer-driven limousine. However, there is no legal precedent for what would happen if a computer-driven car were to be involved in an accident. The main problem would be to discern where responsibility lies should an accident occur. The consensus among traffic analysts is that more autonomous road vehicles would be safer than their human-driven versions. Traffic safety expert Chris Wright from Middlesex University in the UK is among them and believes that in 20 years' time cars will be much more like robots than the cars we drive today. Thrun is even more confident that the technological hurdles will be overcome. "I would say that in 10 years we can have cars that can drive themselves on highways, and before that we will have cars that can park themselves at low speed. The progression to a commercial product may take a little longer, but it's going to happen." Stanley won the 2005 DARPA Grand Challenge and has laid the foundation for robot-controlled cars.
where is mine honour; and if I be a master, where is my fear?" Again; "Have we not all one Father? Hath not one God created us? Why do we deal treacherously every man against his brother?" Thus the prophet Isaiah, in pleading with Jehovah, complains, "Thou hast hid thy face from us, and hast consumed us because of our iniquity. But now, O Lord! thou art our Father; we are the clay, and thou our potter; and we are all the work of thine hand." In these, and a few other instances, the Israelites were occasionally reminded of a filial relation, subsisting between them and their Creator; but the leading character by which he manifested himself to them, was not that of a Father. He sometimes styled himself the God of Abraham, of Isaac, and of Jacob, in honour of their faith and piety; sometimes the God of Israel, as they were the select and chosen people. When Moses received the commission to liberate the Israelites from their bondage, "God said unto Moses, I appeared unto Abraham, unto Isaac, and unto Jacob, in the name of God Almighty; but by my name Jehovah was I not known unto them." The great I AM, the true, the living, the universal Sovereign; in contradistinction to the despicable idols, the nonentities, to which the corrupt imaginations of an ignorant world, had transferred all authority and all honour. To neither Jews nor Heathens, therefore, was the title of Universal Father clearly promulgated, in the manner which characterises and distinguishes the dispensation that is emphatically termed, a Dispensation of Grace. This honour, the most exalted which can possibly be conferred upon the human race, is introduced by the promised Messiah. He takes the lead in this new designation; as he is the medium through whom its blessings are imparted to us. Adam, by his disobedience, lost his title to be the head of a favoured race. The righteous Noah had the honour of introducing a new progeny.
Abraham, by his ready obedience, became the father of the faithful. The wise, the meek, and intrepid Moses was qualified, and appointed, to rescue the people of God from captivity; to become their legislator, to watch over their morals, and to conduct them to the Land of Canaan. These were the faithful Servants of the Most High; and they were greatly honoured. But "God, who at sundry times and in divers manners spake in time past unto the fathers by the prophets, hath, in these last days, spoken unto us by his Son, whom he hath appointed heir of all things." After this divine Messenger had been initiated into his office, by the baptism of John, he received "from God the Father, honour and glory; when there came a voice to him from the excellent glory, This is my beloved Son in whom I am well pleased."* This unequalled mark of approbation from heaven was repeated at the hour of his transfiguration: "Behold a bright cloud overshadowed them, and behold a voice out of the cloud which said, This is my beloved Son, in whom I am well pleased; hear ye him." For this is he of whom the prophet spake, "Behold my servant whom I uphold, mine elect in whom my soul delighteth; I have put my spirit upon him, he shall bring forth judgment to the Gentiles," &c. &c.† Being thus authorised and sanctioned to consider God as his heavenly Father, the language he reverentially adopted, manifests his habitual sense of the exalted honour. When he speaks of himself individually, it is under the humble appellation of the Son of Man; but as he was declared to be the Son of God, with power; in his official, or mediatorial character, he delighted in the title. To his Father he ascribes all the powers with which he was invested. "Verily I say unto you, the Son can do nothing of himself, but what he seeth the Father do; for what thing soever he doth, these also doth the Son likewise.
* 2 Pet. ch. i. 17. † Is. ch. xlii. v. 1.
For the Father loveth the Son, and sheweth him all things which himself doth."* All his addresses to heaven were as praying to the Father; and from the Father he expected all his consolations and support. In the agonies of his mind, previous to his being taken before his judges, as an afflicted, but obedient Son, he prayed, "saying, Father, if thou be willing, remove this cup; nevertheless, not my will, but thine be done;" and he described his ascension as going to the Father. Nor does he appropriate this honoured title to himself exclusively, in consequence of the perfection of his obedience. That Being whom he denominates his Father, he uniformly considers as the Father of his disciples also. He exhorted all who came to him, in order to receive instructions from him, "Let your light shine before men, that they may see your good works, and glorify your Father who is in heaven." "Love your enemies, that you may be the children of your Father who is in heaven. Be ye merciful, as your Father who is in heaven is merciful." "Call no man Father," says he, "upon earth, for one is your Father, who is in heaven." "When ye pray, say our Father who art in heaven." As he was taking a final leave of his disciples, he consoled their minds with this assurance, "I ascend to my Father and your Father, my God and your God." The Apostles, after they had been fully instructed in the nature of Christianity, adopted a similar language. The usual salutation of St. Paul in his Epistles is, "Grace be with you, and peace from God our Father, and the Lord Jesus Christ." All his admonitions, reproofs, exhortations, and encouragements, are in perfect unison with the declaration made in his Epistle to the Romans. "As many as are led by the Spirit of God, they are the sons of God.
* John v. 19.
For ye have not received the spirit of bondage again unto fear (which was the prevalent spirit, and the predominant sensation under the Jewish economy,) but ye have received the spirit of Adoption, whereby we cry, abba Father. The spirit itself beareth witness with our spirits, that we are the children of God; and if children, then Heirs; Heirs of God, and joint Heirs with Christ." The Apos
* Ch. viii. v. 14. † See Note A.
Stop & Yield Signs Assign Right of Way
Stop and yield signs assign the right of way at high volume intersections and enhance traffic flow. Stop and yield signs also create "through" streets, which are unimpeded by cross traffic. Speeds tend to increase on through streets and driver attentiveness tends to decrease. Right of way is determined by the "right hand rule" at low volume intersections which do not have a stop or yield sign. In these situations drivers are to approach intersections with caution and yield the right of way to the driver on their right. State law requires that prior to installing "Stop" or "Yield" control, an engineering study must be completed. This study should document that the installation of the signs is necessary to improve the overall operation of the intersection. The criteria to warrant signing are specifically documented in the national "Manual on Uniform Traffic Control Devices" (MUTCD).
Stop Signs, Speeding & Accident Prevention
At times people request that stop signs be placed at an intersection in response to a recent accident or to slow down traffic. "Stop" signs may not be the best or only solution to a problem. Research shows that intersections with stop signs generally experience more accidents than those without. Some drivers actually speed up between stop sign controlled intersections. Frequently, people experience accidents at intersections because something blocks a driver's line of sight for cross traffic. Shrubs, parked cars, signs and other obstructions may be the culprit at a corner. If that is a problem, it can usually be corrected fairly easily. Complaints about intersections are investigated by the Department of Public Works and Utilities. They will review the history of the accident patterns and note any changes in traffic volume. They also survey the site for obstructions or other conditions such as hills or curves.
After a thorough analysis, traffic engineers can make specific recommendations for making an intersection safer.
Driver Cooperation is the Way To Go
It is the mission of the Lincoln Public Works and Utilities Department to provide citizens a safe, convenient, accessible and affordable transportation system. Compliance with the law is the best way to keep traffic flowing and citizens safe. Approach intersections with caution, observe the "right-hand rule" and always be a defensive driver. It's the "Way to Go." For more information call the Public Works and Utilities Department Engineering Services Division.
Scientists pinpoint genetic risk factors for asthma, hay fever and eczema Press release issued: 30 October 2017 A major international study has pinpointed more than 100 genetic risk factors that explain why some people suffer from asthma, hay fever and eczema. The study was led by a team of scientists, including Dr Manuel Ferreira from QIMR Berghofer Medical Research Institute, Brisbane Australia and Dr Lavinia Paternoster, MRC Integrative Epidemiology Unit, University of Bristol, UK. It has been published in the prestigious journal Nature Genetics. Dr Ferreira said this was the first study designed specifically to find genetic risk factors that are shared among the three most common allergic conditions. "Asthma, hay fever and eczema are allergic diseases that affect different parts of the body: the lungs, the nose and the skin," Dr Ferreira said. "We already knew that they were similar at many levels. For example, we knew that the three diseases shared many genetic risk factors. What we didn’t know was exactly where in the genome those shared genetic risk factors were located. "This is important to know because it tells us which specific genes, when not working properly, cause allergic conditions. This knowledge helps us understand why allergies develop in the first place and, potentially, gives us new clues on how they could be prevented or treated. "We analysed the genomes of 360,838 people and pinpointed 136 separate positions in the genome that are risk factors for developing these conditions. "If you are unlucky and inherit these genetic risk factors from your parents, it will predispose you to all three allergic conditions." Senior author, Dr Paternoster said: "This study has been a huge international effort, bringing together scientists and data from around the world, including the Children of the 90s study based in Bristol. 
"It’s really exciting that we have been able to find so many genetic variants that influence these diseases which affect so many people. "Some of the genes implicated in our study already have drugs available that can target them. So these drugs (currently used for other conditions) may be effective in treating allergic conditions. The next step is to test these in the laboratory." The study involved collaborators from Australia, Germany, the Netherlands, Norway, Sweden, the UK and the US. 'Shared genetic origin of asthma, hay fever and eczema elucidates allergic disease biology' by Manuel A Ferreira et al in Nature Genetics
Eighty per cent of France’s population lives in urban areas, which produce 70 per cent of French greenhouse gas emissions. These are the selected towns and cities:

- Plaine Commune
- Nantes Saint-Nazaire
- La Reunion
- Pays Haut Val d’Alzette

The plan aims to “show that it’s possible to grow, to welcome new inhabitants and do it in a sustainable way,” government adviser Emmanuelle Gay told RFI. The schemes will represent the “top level of sustainable urban development”, Gay says.

The “cities of tomorrow” scheme will receive 750 million euros out of the 35-billion-euro “big loan” that President Nicolas Sarkozy launched in an attempt to revive the economy. The 13 towns and cities, selected from 19 candidates last November, have until 15 March 2011 to present the outlines of their projects, including timescales, repayment of the loan and economic impact.

The plans must include “the use of different sources of renewable energy” and “integrated collective multimode transport”, Benoist Apparu, the Secretary for Urban Planning and Housing, said last Tuesday. The plans also fit into France’s obligations under the European Union’s 20-20-20 target, which aims to reduce greenhouse gases by 20 per cent of 1990 levels, provide 20 per cent of energy from renewable sources and cut energy consumption by 20 per cent – all by 2020.

Environmental campaigners have mixed feelings about the government’s plans. “The objective of developing a more sustainable vision of urban planning is of course positive,” says Marion Richards, from the Climate Action Network France. “France is looking to catch up with other European countries in terms of sustainable urban planning.”

The planning is “integrated” and takes account of existing or planned projects, Richards says. And it sidesteps the usually problematic system of French commune administrations.
Rather than dealing with smaller councils, parishes or municipalities, which number more than 36,000 in France, the 13 projects delegate urban planning to a higher level.

But the different schemes are focused on centralised development, which critics say encourages urban sprawl. And, they say, they contradict other schemes financed by Sarkozy’s stimulus package.

“It’s really focusing on the construction of new buildings,” says Richards. It does not include provision for “retrofitting buildings”, which she says should be a necessary component. “Only 15 per cent of this ‘big loan’ is aiming to support sustainable development - then you also have financing for new highways and airports,” she adds.

A second call for projects worth 250 million euros will be made before the end of the year, topping the fund up to one billion euros.
posted by kim

Find all the horizontal and vertical asymptotes of the functions.

When the denominator equals zero, there is a vertical asymptote. When the function approaches a constant value as x gets very large, there is a horizontal asymptote.

For instance, in the second, q(x) = (x-1)/x = 1 - 1/x; as x gets very large, it becomes 1 - 0, or 1.

In the first, f(x) = x/(x-1) = 1/(1 - 1/x) (multiplied numerator and denominator by 1/x), so f(x) = 1/(1-0) when x is very large, or f(x) = 1.

x = 1 for the first one, x = 0 for the second?

Yes, for the vertical asymptotes.

So what about this one: g(x) = 1.5^x?

No vertical, no horizontal.

How can you tell it has neither?

As Mr. Bob mentioned: when the denominator becomes zero at x = c and c is finite, there is a vertical asymptote. A horizontal asymptote is typically identified by the fact that f(x) approaches a constant value as x -> ∞ or x -> -∞.

In the case of g(x) = 1.5^x, there is no finite value of x that makes g(x) infinite, so there is no vertical asymptote. g(x) becomes infinite as x -> +∞, so there is no horizontal asymptote on the right. But on the left..., as x -> -∞, g(x) approaches zero, so what do you think?
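To make the limits in the thread concrete, here is a quick numerical check (a Python sketch added for illustration, not part of the original discussion). Evaluating f(x) = x/(x-1) at very large x and near x = 1, and g(x) = 1.5^x at very negative x, shows each asymptote directly:

```python
# Numerically checking the asymptotes discussed above.
# f(x) = x/(x-1): vertical asymptote at x = 1, horizontal asymptote y = 1.
# g(x) = 1.5**x: no vertical asymptote; approaches y = 0 as x -> -infinity.

def f(x):
    return x / (x - 1)

def g(x):
    return 1.5 ** x

# As x grows, f(x) approaches the horizontal asymptote y = 1.
print(f(1e6))        # very close to 1
# As x approaches 1, f(x) blows up: the vertical asymptote.
print(f(1.000001))   # very large
# As x -> -infinity, g(x) approaches 0: a horizontal asymptote on the left.
print(g(-100))       # essentially 0
```

The last line is the hint the tutor is driving at: the left tail of 1.5^x hugs the x-axis, so y = 0 is a horizontal asymptote on that side.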
Hi! I'm a junior-high student and have been wondering about this question for a while, asking many of my teachers and never receiving a satisfactory answer. I thought a linguist may have the answer to this question, so here goes:

As many know, some languages have nouns that must be said in certain ways based on their "gender". For example, in French, the words "la" (feminine) and "le" (masculine) both mean the same thing as the article "the" in English, yet they are used in different circumstances based on the noun they are introducing. In French, for example, the word "robe", meaning dress (as a noun), is always introduced with a feminine article such as "la", because the word "robe" is inherently feminine. The word "chapeau", meaning hat, is always masculine and so must be preceded by a masculine article, such as "le".

In English this is not really the case. Nouns do not have a gender, unless they are gender specific (such as "she" rather than "he"); and even then, the gender of the noun does not change the way we introduce it. However, I have noticed that some English words and phrases do seem to be more gender specific than others. Many of the following examples were used in earlier times and are not as commonly used today: "There she blows", an expression used to refer to a whale blowing water, refers to the whale as female, even though in today's world we mostly refer to whales as "it" (gender neutral). Occasionally we also hear ships being referred to as "her" - a captain might say about his ship, "I call her the Santa Maria". Other examples are some nouns in English having endings on them when they signify different genders, even though they essentially mean the same thing. "Waiter" and "waitress" is one example, as is "host" and "hostess".
Although I realize the gendering of words in these English examples is not exactly the same as how it is done in French, my questions remain: First of all, what is the term for languages that have nouns with specific genders, such as French? There must be an easier way to say it than my clumsy description above. Secondly, was English previously a language that had those gender-specific nouns, or are those example phrases above just some of the many idiosyncrasies of this complex language? I thank you in advance for any insight you have on these questions.

To elaborate on my colleague's answers:

1) English used to have gender, and when it did, all nouns were classified as masculine, feminine or neuter. This classification controlled grammatical features such as pronoun replacement (he/she/it) and adjectival agreement. Today, although we may personify objects and animals as if they were people (e.g. I name all my cars), it's generally optional. Therefore English no longer has grammatical gender. Also, endings like -er/-ess generally apply only to entities which have a sex, like people or animals; English does not apply them to pens and pencils.

2) As Prof Fidelholtz noted, other classification systems exist. Many Sub-Saharan languages have complex systems which distinguish between animate (alive) vs non-animate, where animates are further broken down into shape types, and one language (Fula) has an ending just for cows.

3) The evolution of grammatical gender is a little murky, although most believe that Indo-European began with the common animate/inanimate distinction, to which feminine gender was added later. An interesting case of a gender system possibly in progress is Chinese/Vietnamese classifiers. When objects are counted, they must be counted as one of "something", and these "somethings" could evolve into a more integrated classification system. I also suspect classification can be random depending on ending.
In most Indo-European languages, gender can be partially predicted based on the word ending or final sound, and it's very common for nouns to get reclassified based on a reinterpretation of a pattern based on an ending. There are also cases like "Fräulein" ('girl'), which is semantically feminine but classified as grammatically neuter because of the ending -lein. The ending here is the key, not the semantics.
Published Resources Details

Conference Paper
- The construction of aerodromes in the South Pacific during World War II by NSW engineers
- From the Past to the Future: 18th Australian Engineering Heritage Conference 2015 [Newcastle]
- Engineers Australia, Barton, Australian Capital Territory, 2015, pp. 203-212

This paper examines the role of civilian engineers from NSW in the construction of two strategically important aerodromes in the South Pacific during World War II: Tontouta (New Caledonia) and Norfolk Island. Construction work on both aerodromes was undertaken by contingents from NSW, supported by the US Army Air Corps with personnel and equipment. The aerodromes were initially planned as defensive measures; however, their role rapidly changed to an offensive one after the Battle of the Coral Sea in May 1942. In this capacity they acted as re-fuelling/supply points for ferry aircraft supporting the offensive campaign pursuing the Japanese further north. The paper chronicles the civilian engineers' contribution to the war effort and is supported by extensive archival material sourced from the construction period.

Related Published resources
- From the Past to the Future: 18th Australian Engineering Heritage Conference 2015 [Newcastle], Engineers Australia, Barton, Australian Capital Territory, 2015, 230 pp, https://search.informit.com.au/browsePublication;res=IELENG;isbn=9781922107435.
Stereochemistry of the Caffeine Molecule

The nitrogen atoms in the caffeine molecule are all essentially planar. Even though some are often drawn with three single bonds, the lone pairs on these atoms are involved in resonance with adjacent double-bonded carbon atoms, and thus adopt sp2 orbital hybridisation.

Caffeine is a stimulant drug. It is a xanthine alkaloid compound that acts as a psychoactive stimulant and a mild diuretic (at doses higher than 300 mg; see Relative content: comparison of different sources) in humans. The word comes from the French term for coffee, café. Caffeine is also called guaranine when found in guarana, mateine when found in mate, and theine when found in tea; all of these names are synonyms for the same chemical compound.

Caffeine is found in varying quantities in the beans, leaves, and fruit of over 60 plants, where it acts as a natural pesticide that paralyzes and kills certain insects feeding on the plants. It is most commonly consumed by humans in infusions extracted from the beans of the coffee plant and the leaves of the tea bush, as well as from various foods and drinks containing products derived from the kola nut or from cacao. Other sources include yerba mate, guarana berries, and the yaupon holly.

In humans, caffeine is a central nervous system (CNS) stimulant, having the effect of temporarily warding off drowsiness and restoring alertness. Beverages containing caffeine, such as coffee, tea, soft drinks and energy drinks, enjoy great popularity; caffeine is the world's most widely consumed psychoactive substance, but unlike most other psychoactive substances, it is legal and unregulated in nearly all jurisdictions. In North America, 90% of adults consume caffeine daily. The U.S. Food and Drug Administration lists caffeine as a "Multiple Purpose Generally Recognized as Safe Food Substance". However, a 2008 study indicates significant fetal toxicity (see "Caffeine intake during pregnancy").
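As a small worked example to accompany the chemistry above: caffeine's molecular formula is C8H10N4O2 (a standard fact, though not stated in the text itself), so its molar mass follows directly from the standard atomic weights. A minimal Python sketch:

```python
# Molar mass of caffeine from its molecular formula, C8H10N4O2.
# Standard atomic weights in g/mol, rounded to three decimals.
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}
CAFFEINE = {"C": 8, "H": 10, "N": 4, "O": 2}

molar_mass = sum(ATOMIC_WEIGHT[el] * n for el, n in CAFFEINE.items())
print(f"{molar_mass:.2f} g/mol")  # 194.19 g/mol
```

The result, about 194.19 g/mol, is useful later in the article when doses are quoted in milligrams.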
Caffeine is a plant alkaloid, found in many plant species, where it acts as a natural pesticide: it paralyzes and kills certain insects feeding upon the plant, and particularly high caffeine levels have been reported in seedlings that are still developing foliage but lack mechanical protection. High caffeine levels have also been found in the soil surrounding coffee bean seedlings. Caffeine is therefore understood to function both as a natural pesticide and as an inhibitor of the germination of other nearby coffee seedlings, giving the plant a better chance of survival.

The most commonly used caffeine-containing plants are coffee, tea, and to a lesser extent cocoa. Other, less commonly used, sources of caffeine include the yerba mate and guarana plants, which are sometimes used in the preparation of teas and energy drinks. Two of caffeine's alternative names, mateine and guaranine, are derived from the names of these plants. Some yerba mate enthusiasts assert that mateine is a stereoisomer of caffeine, which would make it a different substance altogether. However, caffeine is an achiral molecule, and therefore has no stereoisomers. Many natural sources of caffeine also contain widely varying mixtures of other xanthine alkaloids, including the cardiac stimulants theophylline and theobromine, and other substances such as polyphenols, which can form insoluble complexes with caffeine.

The world's primary source of caffeine is the coffee bean (the seed of the coffee plant), from which coffee is brewed. Caffeine content in coffee varies widely depending on the type of coffee bean and the method of preparation used; even beans within a given bush can show variations in concentration. In general, one serving of coffee ranges from 40 milligrams, for a single shot (30 milliliters) of arabica-variety espresso, to about 100 milligrams for a cup (120 milliliters) of drip coffee.
Generally, dark-roast coffee has less caffeine than lighter roasts because the roasting process reduces the bean's caffeine content. Arabica coffee normally contains less caffeine than the robusta variety. Coffee also contains trace amounts of theophylline, but no theobromine.

Tea is another common source of caffeine. Tea usually contains about half as much caffeine per serving as coffee, depending on the strength of the brew. Certain types of tea, such as black and oolong, contain somewhat more caffeine than most other teas. Tea contains small amounts of theobromine and slightly higher levels of theophylline than coffee. Preparation has a significant impact on tea, and color is a very poor indicator of caffeine content. Teas like the pale Japanese green tea gyokuro, for example, contain far more caffeine than much darker teas like lapsang souchong, which has very little.

Caffeine is also a common ingredient of soft drinks such as cola, originally prepared from kola nuts. Soft drinks typically contain about 10 to 50 milligrams of caffeine per serving. By contrast, energy drinks such as Red Bull contain as much as 80 milligrams of caffeine per serving. The caffeine in these drinks either originates from the ingredients used or is an additive derived from the product of decaffeination or from chemical synthesis. Guarana, a prime ingredient of energy drinks, contains large amounts of caffeine with small amounts of theobromine and theophylline in a naturally occurring slow-release excipient.

Chocolate derived from cocoa contains a small amount of caffeine. The weak stimulant effect of chocolate may be due to a combination of theobromine and theophylline as well as caffeine. Chocolate contains too little of these compounds for a reasonable serving to create effects in humans that are on par with coffee. A typical 28-gram serving of a milk chocolate bar has about as much caffeine as a cup of decaffeinated coffee.
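The serving figures above hide a distinction worth making explicit: using the article's own numbers, a single espresso shot delivers less total caffeine than a cup of drip coffee but is more concentrated per milliliter. A quick sketch with those figures:

```python
# Caffeine per serving vs. caffeine concentration, using the figures
# quoted above: 40 mg per 30 mL espresso shot, ~100 mg per 120 mL drip cup.
servings = {
    "espresso (single shot)": (40, 30),    # (mg caffeine, mL per serving)
    "drip coffee (cup)":      (100, 120),
}

for name, (mg, ml) in servings.items():
    print(f"{name}: {mg} mg total, {mg/ml:.2f} mg/mL")
```

Espresso works out to roughly 1.33 mg/mL against drip coffee's 0.83 mg/mL, which is why "espresso is stronger" and "a cup of drip has more caffeine" are both true statements.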
In recent years various manufacturers have begun putting caffeine into shower products such as shampoo and soap, claiming that caffeine can be absorbed through the skin. However, the effectiveness of such products has not been proven, and they are likely to have little stimulatory effect on the central nervous system because caffeine is not readily absorbed through the skin.

Humans have consumed caffeine since the Stone Age. Early peoples found that chewing the seeds, bark, or leaves of certain plants had the effects of easing fatigue, stimulating awareness, and elevating mood. Only much later was it found that the effect of caffeine was increased by steeping such plants in hot water. Many cultures have legends that attribute the discovery of such plants to people living many thousands of years ago. According to one popular Chinese legend, the Emperor of China Shennong, reputed to have reigned in about 3,000 BCE, accidentally discovered that when some leaves fell into boiling water, a fragrant and restorative drink resulted. Shennong is also mentioned in Lu Yu's Cha Jing, a famous early work on the subject of tea.

The history of coffee has been recorded as far back as the ninth century. During that time, coffee beans were available only in their native habitat, Ethiopia. A popular legend traces its discovery to a goatherder named Kaldi, who apparently observed goats that became elated and sleepless at night after browsing on coffee shrubs and, upon trying the berries that the goats had been eating, experienced the same vitality. The earliest literary mention of coffee may be a reference to bunchum in the works of the 9th-century Persian physician al-Razi. In 1587, Malaye Jaziri compiled a work tracing the history and legal controversies of coffee, entitled "Undat al safwa fi hill al-qahwa".
In this work, Jaziri recorded that one Sheikh, Jamal-al-Din al-Dhabhani, mufti of Aden, was the first to adopt the use of coffee in 1454, and that in the 15th century the Sufis of Yemen routinely used coffee to stay awake during prayers. Towards the close of the 16th century, the use of coffee was recorded by a European resident in Egypt, and about this time it came into general use in the Near East. The appreciation of coffee as a beverage in Europe, where it was first known as "Arabian wine", dates from the 17th century. During this time "coffee houses" were established, the first being opened in Constantinople and Venice. In Britain, the first coffee houses were opened in London in 1652, at St Michael's Alley, Cornhill. They soon became popular throughout Western Europe, and played a significant role in social relations in the 17th and 18th centuries.

The kola nut, like the coffee berry and tea leaf, appears to have ancient origins. It is chewed in many West African cultures, individually or in a social setting, to restore vitality and ease hunger pangs. In 1911, kola became the focus of one of the earliest documented health scares, when the US government seized 40 barrels and 20 kegs of Coca-Cola syrup in Chattanooga, Tennessee, alleging that the caffeine in its drink was "injurious to health". On March 13, 1911, the government initiated The United States v. Forty Barrels and Twenty Kegs of Coca-Cola, hoping to force Coca-Cola to remove caffeine from its formula by making claims such as that the excessive use of Coca-Cola at one girls' school led to "wild nocturnal freaks, violations of college rules and female proprieties, and even immoralities". Although the judge ruled in favor of Coca-Cola, two bills were introduced to the U.S. House of Representatives in 1912 to amend the Pure Food and Drug Act, adding caffeine to the list of "habit-forming" and "deleterious" substances which must be listed on a product's label.
The earliest evidence of cocoa use comes from residue found in an ancient Mayan pot dated to 600 BCE. In the New World, chocolate was consumed in a bitter and spicy drink called xocoatl, often seasoned with vanilla, chile pepper, and achiote. Xocoatl was believed to fight fatigue, a belief that is probably attributable to its theobromine and caffeine content. Chocolate was an important luxury good throughout pre-Columbian Mesoamerica, and cocoa beans were often used as currency. Chocolate was introduced to Europe by the Spaniards and became a popular beverage by 1700. The Spaniards also introduced the cacao tree into the West Indies and the Philippines. It was used in alchemical processes, where it was known as Black Bean.

In 1819, the German chemist Friedrich Ferdinand Runge isolated relatively pure caffeine for the first time. According to Runge, he did this at the behest of Johann Wolfgang von Goethe. In 1827, Oudry isolated "theine" from tea, but it was later proved by Mulder and Jobst that theine was the same as caffeine. The structure of caffeine was elucidated near the end of the 19th century by Hermann Emil Fischer, who was also the first to achieve its total synthesis. This was part of the work for which Fischer was awarded the Nobel Prize in 1902.

Today, global consumption of caffeine has been estimated at 120,000 tons per annum, making it the world's most popular psychoactive substance. This number equates to one serving of a caffeinated beverage for every person, per day. In North America, 90% of adults consume some amount of caffeine daily.

Mechanism of action

The caffeine molecule acts through multiple mechanisms, involving both action on receptors and channels on the cell membrane and intracellular action on calcium and cAMP pathways.
By virtue of its purine structure, caffeine can act on some of the same targets as adenosine-related nucleosides and nucleotides, such as the cell-surface P1 GPCRs for adenosine, the intracellular ryanodine receptor (RyR), which is the physiological target of cADPR (cyclic ADP-ribose), and cAMP-phosphodiesterase (cAMP-PDE). Although the action is agonistic in some cases, it is antagonistic in others. Physiologically, however, caffeine's action is unlikely to be due to increased RyR opening, as that requires plasma concentrations above the lethal dosage; the action is most likely through adenosine receptors.

Like alcohol, nicotine, and antidepressants, caffeine readily crosses the blood-brain barrier. Once in the brain, the principal mode of action of caffeine is as an antagonist of adenosine receptors. The caffeine molecule is structurally similar to adenosine, and binds to adenosine receptors on the surface of cells without activating them (an "antagonist" mechanism of action); caffeine therefore acts as a competitive inhibitor. The reduction in adenosine activity results in increased activity of the neurotransmitter dopamine, largely accounting for the stimulatory effects of caffeine. Caffeine can also increase levels of epinephrine/adrenaline, possibly via a different mechanism. Acute usage of caffeine also increases levels of serotonin, causing positive changes in mood.

Caffeine is also a known competitive inhibitor of the enzyme cAMP-phosphodiesterase (cAMP-PDE), which converts cyclic AMP (cAMP) in cells to its noncyclic form; inhibiting it allows cAMP to build up in cells. Cyclic AMP participates in the activation of protein kinase A (PKA), which begins the phosphorylation of specific enzymes used in glucose synthesis. By blocking cAMP's removal, caffeine intensifies and prolongs the effects of epinephrine and epinephrine-like drugs such as amphetamine, methamphetamine, or methylphenidate.
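The competitive-antagonist mechanism described above can be sketched with the standard receptor-occupancy formula for a competitive inhibitor: the agonist's fractional occupancy is (A/Kd) / (1 + A/Kd + I/Ki). The concentrations and binding constants below are arbitrary illustrative values, not measured physiological ones:

```python
# Toy model of competitive antagonism: caffeine (the antagonist) competes
# with adenosine (the agonist) for the same binding site without
# activating the receptor.
# Fractional receptor occupancy by the agonist:
#   occupancy = (A/Kd) / (1 + A/Kd + I/Ki)
# All concentrations and constants here are arbitrary illustrative values.

def agonist_occupancy(a, kd, i=0.0, ki=1.0):
    """Fraction of receptors bound by agonist at concentration a,
    in the presence of a competitive inhibitor at concentration i."""
    return (a / kd) / (1 + a / kd + i / ki)

baseline = agonist_occupancy(a=1.0, kd=1.0)                    # no caffeine
with_caffeine = agonist_occupancy(a=1.0, kd=1.0, i=4.0, ki=1.0)

print(baseline)       # 0.5   -> half the receptors bound by adenosine
print(with_caffeine)  # ~0.167 -> adenosine signalling reduced
```

The qualitative point matches the text: the antagonist does not activate anything itself; it simply lowers how often adenosine can occupy (and activate) its own receptor.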
Increased concentrations of cAMP in parietal cells cause increased activation of protein kinase A (PKA), which in turn increases activation of the H+/K+ ATPase, finally resulting in increased gastric acid secretion by the cell.

Caffeine (and theophylline) can freely diffuse into cells and cause intracellular calcium release (independent of extracellular calcium) from the calcium stores in the endoplasmic reticulum (ER). This release is only partially blocked by ryanodine receptor blockade with ryanodine, dantrolene, ruthenium red, and procaine (thus it may involve the ryanodine receptor and probably some additional calcium channels), but it is completely abolished after calcium depletion of the ER by SERCA inhibitors such as thapsigargin (TG) or cyclopiazonic acid (CPA). The action of caffeine on the ryanodine receptor may depend on both the cytosolic and the luminal ER concentrations of Ca2+. At low millimolar concentrations of caffeine, the RyR channel open probability (Po) is significantly increased, mostly due to a shortening of the lifetime of the closed state. At concentrations >5 mM, caffeine opens RyRs even at picomolar cytosolic Ca2+ and dramatically increases the open time of the channel, so that the calcium release is stronger than even an action potential can generate. This mode of action of caffeine is probably due to mimicking the action of the physiologic metabolite of NAD called cADPR (cyclic ADP-ribose), which has a similar potentiating action on ryanodine receptors. Caffeine may also directly inhibit delayed-rectifier and A-type K+ currents and activate plasmalemmal Ca2+ influx in certain vertebrate and invertebrate neurons.

The metabolites of the caffeine molecule contribute to caffeine's effects. Theobromine is a vasodilator that increases the amount of oxygen and nutrient flow to the brain and muscles.
Theophylline, the second of the three primary metabolites, acts as a smooth muscle relaxant that chiefly affects bronchioles, and as a chronotrope and inotrope that increases heart rate and efficiency. The third metabolic derivative, paraxanthine, is responsible for an increase in lipolysis, a process which releases glycerol and fatty acids into the blood to be used as a source of fuel by the muscles.

In large amounts, and especially over extended periods of time, caffeine can lead to a condition known as caffeinism. Caffeinism usually combines caffeine dependency with a wide range of unpleasant physical and mental conditions including nervousness, irritability, anxiety, tremulousness, muscle twitching (hyperreflexia), insomnia, headaches, respiratory alkalosis and heart palpitations. Furthermore, because caffeine increases the production of stomach acid, high usage over time can lead to peptic ulcers, erosive esophagitis, and gastroesophageal reflux disease. However, since both "regular" and decaffeinated coffees have been shown to stimulate the gastric mucosa and increase stomach acid secretion, caffeine is probably not the sole component of coffee responsible.

There are four caffeine-induced psychiatric disorders recognized by the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition: caffeine intoxication, caffeine-induced anxiety disorder, caffeine-induced sleep disorder, and caffeine-related disorder not otherwise specified (NOS).

An acute overdose of caffeine, usually in excess of 400 milligrams (more than 3-4 cups of brewed coffee), can result in a state of central nervous system overstimulation called caffeine intoxication. Some people seeking caffeine intoxication resort to insufflation (snorting) of caffeine powder, usually finely crushed caffeine tablets; this induces a faster and more intense reaction. The symptoms of caffeine intoxication are not unlike overdoses of other stimulants.
They may include restlessness, nervousness, excitement, insomnia, flushing of the face, increased urination, gastrointestinal disturbance, muscle twitching, a rambling flow of thought and speech, irritability, irregular or rapid heart beat, and psychomotor agitation. In cases of much larger overdoses, mania, depression, lapses in judgment, disorientation, loss of social inhibition, delusions, hallucinations, psychosis, and rhabdomyolysis may occur; in extreme cases, death can result.

The median lethal dose (LD50) of caffeine is 192 milligrams per kilogram in rats. The LD50 of caffeine in humans depends on weight and individual sensitivity and is estimated to be about 150 to 200 milligrams per kilogram of body mass, roughly 80 to 100 cups of coffee for an average adult taken within a limited timeframe that is dependent on half-life. Though achieving a lethal dose with regular coffee would be exceptionally difficult, there have been reported deaths from overdosing on caffeine pills, with serious symptoms of overdose requiring hospitalization occurring from as little as 2 grams of caffeine. Death typically occurs due to ventricular fibrillation brought about by the effects of caffeine on the cardiovascular system. Treatment of severe caffeine intoxication is generally supportive, providing treatment of the immediate symptoms, but if the patient has very high serum levels of caffeine, then peritoneal dialysis, hemodialysis, or hemofiltration may be required.

Anxiety and sleep disorders

Long-term overuse of caffeine can elicit a number of psychiatric disturbances. Two such disorders recognized by the American Psychiatric Association (APA) are caffeine-induced sleep disorder and caffeine-induced anxiety disorder. In the case of caffeine-induced sleep disorder, an individual regularly ingests high doses of caffeine sufficient to induce a significant disturbance in his or her sleep, sufficiently severe to warrant clinical attention.
In some individuals, large amounts of caffeine can induce anxiety severe enough to necessitate clinical attention. This caffeine-induced anxiety disorder can take many forms, from generalized anxiety to panic attacks, obsessive-compulsive symptoms, or even phobic symptoms. Because this condition can mimic organic mental disorders, such as panic disorder, generalized anxiety disorder, bipolar disorder, or even schizophrenia, a number of medical professionals believe caffeine-intoxicated people are routinely misdiagnosed and unnecessarily medicated, when the treatment for caffeine-induced psychosis would simply be to withhold further caffeine. A study in the British Journal of Addiction concluded that caffeinism, although infrequently diagnosed, may afflict as many as one person in ten of the population.

Effects on the heart

Caffeine increases the levels of cAMP in heart cells, mimicking the effects of epinephrine. cAMP diffuses through the cell and acts as a "second messenger", activating protein kinase A (PKA; cAMP-dependent protein kinase). According to one study, caffeine, in the form of coffee, significantly reduces the risk of heart disease in epidemiological studies. However, the protective effect was found only in participants who were not severely hypertensive (i.e. patients not suffering from very high blood pressure). Furthermore, no significant protective effect was found in participants aged less than 65 years, or in cerebrovascular disease mortality for those aged 65 years or more.

Caffeine intake during pregnancy

The Food Standards Agency has recommended that pregnant women limit their caffeine intake to less than 300 mg of caffeine a day – the equivalent of four cups of coffee a day. A higher intake may be associated with miscarriage.
A study by Dr De-Kun Li of the Kaiser Permanente Division of Research, which appears in the American Journal of Obstetrics and Gynecology, concludes that an intake of 200 milligrams or more per day, representing two or more cups, "significantly increases the risk of miscarriage". However, an epidemiologic study published in early January 2008 found no observable increase in the risk of miscarriage from caffeine.
Improving Your Sleep and Fighting Fatigue

Improving your sleep and fighting fatigue refers to taking steps to ensure that you get enough restful and restorative sleep and to overcome feelings of tiredness or low energy. This can be accomplished through lifestyle changes, such as establishing a consistent sleep schedule, creating a bedtime routine, limiting exposure to screens before bedtime, creating a sleep-conducive environment, and exercising regularly. By taking these steps, you can improve the quality of your sleep, reduce stress and anxiety, and wake up feeling refreshed and energized. Good sleep is essential for physical and mental health and well-being, and can help you perform better in daily activities and improve your overall quality of life.

Habits for Improving Your Sleep and Fighting Fatigue

Establish a consistent sleep schedule: Having a consistent sleep schedule means going to bed and waking up at the same time every day, even on weekends. This helps regulate your body’s internal clock and makes it easier to fall asleep and wake up feeling refreshed. When you consistently follow a sleep schedule, your body gets used to the routine and it becomes easier to fall asleep and wake up at the desired times. If you have trouble sticking to a sleep schedule, consider setting a bedtime alarm to remind you to start winding down, and avoid activities that can interfere with sleep, such as screens, caffeine, and intense exercise.

Create a bedtime routine: Developing a bedtime routine can be an effective way to signal to your body that it’s time to wind down and prepare for sleep. A bedtime routine can include a variety of activities, such as reading a book, listening to calming music, writing in a journal, taking a warm bath, or stretching. The key is to find activities that help you relax and transition into sleep mode.
By consistently performing the same activities before bed, your body begins to associate them with sleep.

Limit exposure to screens before bedtime: Limiting exposure to screens before bedtime is an important step in improving the quality of your sleep. The blue light emitted by electronic devices such as smartphones, tablets, and computers can interfere with the production of the sleep hormone melatonin and make it more difficult to fall asleep. To reduce your exposure to screens before bedtime, consider doing the following:
- Avoid screens for at least an hour before bedtime.
- If you need to use a device in the evening, use night mode or install a blue light filter to reduce the amount of blue light emitted.
- Read a book, listen to music, or engage in another relaxing activity instead of using screens before bed.
- Turn off all electronic devices and place them in another room before going to bed to avoid temptation.
By reducing your exposure to screens before bed, you can improve the quality of your sleep and wake up feeling refreshed and energized.

Create a sleep-conducive environment: Creating a sleep-conducive environment is an important aspect of improving the quality of your sleep. Your bedroom should be quiet, dark, and cool to promote a relaxing and restful sleep environment. Here are some tips for creating a sleep-conducive environment:
- Keep the bedroom quiet by using earplugs or a white noise machine if necessary.
- Make sure the room is dark by using heavy curtains or an eye mask.
- Keep the room cool by using a fan or adjusting the temperature of your air conditioning or heating system.
- Make your bed comfortable with fresh, high-quality bedding, and invest in a supportive mattress and pillows that are right for you.
- Minimize distractions by removing any items from the room that may interfere with sleep, such as televisions or electronics.
By creating a sleep-conducive environment, you can improve the quality of your sleep and wake up feeling refreshed and energized.
Exercise regularly: Exercise is an essential component of a healthy sleep routine. Regular exercise can help improve the quality of your sleep by reducing stress and promoting relaxation. However, it’s important to avoid intense exercise too close to bedtime, as it can interfere with sleep by elevating your heart rate and making it harder to fall asleep. To maximize the sleep-promoting benefits of exercise, consider doing the following:
- Aim for at least 30 minutes of moderate physical activity most days of the week.
- Exercise in the morning or early afternoon to avoid elevating your heart rate too close to bedtime.
- Find an exercise routine that you enjoy and that helps you relax, such as yoga, stretching, or walking.
- Avoid vigorous or intense exercise close to bedtime.
By exercising regularly, you can help improve the quality of your sleep, reduce stress and anxiety, and wake up feeling more refreshed and energized.
If you adhere to the basic principle that in a democracy the citizens pick their leaders, what happened in the waning moments of the Supreme Court’s last term should alarm you as it has alarmed democracy advocates throughout the nation. Lost in the wake of a half-dozen unhinged majority opinions issued in the last week of the term, the Court decided to hear a redistricting case out of North Carolina, with direct impact in Pennsylvania as well, raising a fringe constitutional argument known as the “independent state legislature” theory. The decision to hear argument in the case of Moore v. Harper signals that the Court will seriously consider, and is very likely to adopt, a theory that will render your vote meaningless if you happen to live in a state like Pennsylvania, where Republicans control the state legislature. If it was not already clear, the decision to hear argument in this case solidifies that the Supreme Court, and more specifically the six radical right-wing Justices packed onto the Court, is the single greatest threat facing our democracy. We now have only months left to preempt this looming threat.

The Independent State Legislature Theory Is Incompatible with Democracy

The independent state legislature theory is incompatible with the basic principle of democracy that citizens pick their leaders. It is a chilling and entirely made-up doctrine, hatched from a concurring opinion by then-Chief Justice William Rehnquist in the infamous Bush v. Gore case, without any rooting in the Constitution or in US history. The theory interprets the word “legislature” in the Constitution to mean that state legislatures – and only state legislatures – can regulate elections. It rests on the fact that Article I’s Election Clause states that the “Times, Places and Manner” of congressional elections “shall be prescribed in each State by the Legislature thereof,” subject to an override by Congress.
Similarly, Article II gives “the Legislature” of each state the power to set the “Manner” of choosing presidential electors. This exclusionary interpretation of the word “legislature” is a stark departure from the standard interpretation, where “legislature” means the state’s entire lawmaking apparatus. The Republicans pushing for this extreme interpretation want to exclude the governor, the state courts, and citizen-led ballot measures from having any role in federal elections. By excluding all these other parts of the state government, the theory would grant all the power to set election rules and congressional maps to the state legislature – unchecked by the governor’s veto, the state courts, the people themselves, or even the state constitution.

Every State Controlled by Republicans – Including Pennsylvania – Is a Target

Today, Republicans have control over 30 state legislatures, including in Pennsylvania, where adoption of the independent state legislature theory would eradicate, overnight, over 30 years of effort to secure fair congressional districts. Pennsylvania’s Constitution, as interpreted by our State Supreme Court, bans partisan gerrymandering through its free election clause. Recall Pennsylvania’s partisan congressional maps in place after 2010 that effectively cemented a 13-5 Republican advantage in an otherwise evenly divided state. This unconstitutional map demonstrates how an unchecked state legislature could use sophisticated computerized models to draw maps with what one court called “surgical precision,” effectively letting politicians choose their voters instead of the other way around. In 2018, the Pennsylvania State Supreme Court ruled that these gerrymandered maps violated the Pennsylvania Constitution and ordered new congressional district maps that achieved a fair partisan balance.
The maps since 2018, and those in place now, thanks to critical checks from our State Supreme Court and the Governor, meet all the hallmarks of fairness, insofar as they are non-partisan, compact, minimize county and municipal splits, and preserve communities of interest. If the GOP-controlled legislature had not been checked, Pennsylvania’s congressional maps would likely have been drawn to limit Democrats to three or possibly fewer congressional representatives. Pennsylvania and North Carolina have been on remarkably similar trajectories when it comes to the battle over congressional maps. Republican politicians in Harrisburg, like those in Raleigh, North Carolina, livid at having to play fair, and fueled by an electorate that has lost faith in elections thanks to Big Lie proponents like seditionist and GOP gubernatorial nominee Doug Mastriano, have explored different strategies to gain unchecked power. In addition to pushing proposed constitutional amendments to avoid the Governor’s veto or adverse rulings from the state supreme courts, the GOP in both states has also filed lawsuit after lawsuit, testing every conceivable legal argument to challenge the fair congressional maps. The independent state legislature theory was widely regarded as among the most fringe and, frankly, craziest of the arguments pushed by the Republicans. Until the final seconds of the Supreme Court’s last term, the proponents of fair maps in Pennsylvania and North Carolina had been successful in fighting back the GOP onslaught, prevailing in nearly every round to keep these maps in place. When the Republican-controlled North Carolina legislature drew congressional maps, they were challenged and declared to be partisan gerrymanders in violation of the North Carolina constitution. As in Pennsylvania, North Carolina drew fair maps for the upcoming 2022 election at the insistence of its Supreme Court.
The North Carolina and Pennsylvania GOP then turned to the US Supreme Court, and citing the fringe independent state legislature theory, they first tried unsuccessfully to persuade the Supreme Court to use its “shadow docket” to block the fair maps. Although the Supreme Court denied that request, to the shock and dismay of many pro-democracy advocates, the Supreme Court agreed to hear the case in full next term. Disturbingly, in litigation surrounding the 2020 elections last year, conservative Justices Samuel Alito, Neil Gorsuch, Clarence Thomas, and Brett Kavanaugh signaled a strong interest in the independent state legislature theory, with the first three all but embracing the theory in their dissent.

The US Supreme Court’s Most Underrated Power

The mere fact that the US Supreme Court decided to hear this case is profoundly disturbing. The Supreme Court, unlike the federal courts below it, has the power to pick and choose the cases it hears. Each year, the Court receives between 7,000 and 8,000 petitions for a writ of certiorari, and the Court grants and hears argument in about 80 cases. The fact that one of those 80 cases for its next term will consider the viability of the independent state legislature theory is, in and of itself, a dangerous sign for democracy. This is particularly true given the current makeup of the Court, where a 6-3 right-wing supermajority has shown little or no deference to precedent, and has been willing to adopt far-fetched legal theories in service of political or ideological goals. In the last week of its last term, from June 23 to June 30, the Court issued a devastating series of decisions:
– In Dobbs v. Jackson Women’s Health Organization, the Court overruled Roe v. Wade and permitted the near total prohibition of abortions in many states, including in some places for children who are raped and become pregnant as a result of that rape.
– In New York State Rifle & Pistol Association, Inc. v.
Bruen, the Court struck down efforts to license concealed weapons, including by states with densely populated cities, making it easier to carry guns in and around crowded locations.
– In West Virginia v. Environmental Protection Agency, the Court used a fringe theory known as the “major questions doctrine” to severely restrict the EPA’s authority to regulate greenhouse gases. Adoption of the major questions doctrine will make it significantly more difficult for any federal agency to address new or developing problems, including looming catastrophes resulting from climate change. Like the independent state legislature theory, the major questions doctrine was concocted in a right-wing think tank and has no basis in history or in the Constitution.
– In Oklahoma v. Castro-Huerta, the Court dramatically undermined Native American tribal sovereignty.
– In Vega v. Tekoh, the Court again undermined the rights established by Miranda v. Arizona, and expanded immunity for police officers who violate those rights.
– In Kennedy v. Bremerton School District, the Court tore down the wall between church and state in public schools, permitting a coach to lead a public prayer group with his students at midfield, on the 50-yard line.
– Finally, in Biden v. Texas, the Court was just one vote shy of forcing the Biden Administration to retain Trump’s “remain in Mexico” policy in dealing with migrants. The Supreme Court came dangerously close to substituting the will of federal circuit judges for that of the administration when it comes to setting foreign policy and dealing with foreign nations.
Moreover, the conservative justices on this Court have repeatedly shown particular antipathy to voting rights. In its 2013 decision in Shelby County v. Holder, the Court gutted the crown jewel of the 1965 Voting Rights Act. In a series of cases since then, the Court has prevented federal courts from intervening to prevent partisan gerrymandering of congressional districts.
In 2020, in a 5-4 ruling in Republican National Committee v. Democratic National Committee, the Supreme Court forced the citizens of Wisconsin to choose between voting and protecting their health. After many polling places were closed, including 157 in Milwaukee (leaving only five open in the entire city), the Court held that citizens could not challenge voter suppression on the eve of an election. Last year, the Court considered a series of emergency motions in cases challenging various forms of voter suppression, and sided with the states that engaged in the suppression every time. Even this year, again on its shadow docket, the Court barred federal courts from requiring states to correct unconstitutional congressional maps before the 2022 midterm elections. Most outrageously, in Merrill v. Milligan, the Court stayed a decision of a lower court that sought to impose fair maps after the Alabama legislature had adopted maps that deliberately diluted the power of Black and Democratic voters, in clear violation of the Voting Rights Act. Notably, the Court cited another fringe doctrine, the so-called “Purcell principle,” to keep the unfair maps in place, claiming that electoral changes made too close to an election (nine months away) would confuse voters. For this Supreme Court, speculation about so-called “voter confusion” is a greater sin than actual voter disenfranchisement. As further evidence that these fringe theories are being enlisted by the Court to advance partisan political objectives, the Purcell principle was used to disenfranchise Black and Democratic Alabama voters under the theory that their dispute was presented too late for the federal courts to intervene. Too late, despite the fact that the unconstitutional maps were disputed within one day of being adopted by Alabama. If voters cannot bring their claims within a day of the approval of the maps, then it is impossible to give any relief to disenfranchised voters.
It is evident that we have a supermajority on the Supreme Court that is: (A) hostile to voting rights; (B) unmoored from precedent; and (C) willing to use fringe legal theories to advance partisan political ends.

What Could & Will Go Wrong

The impact of the Supreme Court’s adoption of the independent state legislature theory will be immediate and devastating. Next June, when the Court issues its decision in Moore v. Harper, it will relegate Democrats, minorities, and anyone else who cares deeply about preserving and strengthening democracy, to permanent minority status – even in states where we are the majority. In its most unhinged iteration, the theory is the very same one that insurrectionist lawyers John Eastman and Jeffrey Clark were pushing to overturn the results of the 2020 election. State legislatures fully empowered by this theory, with ultimate authority over how elections are run, could simply discard the electoral outcome and submit slates of fake electors. In an equally destructive interpretation of the theory, the Pennsylvania and North Carolina Republicans are arguing that their grotesquely gerrymandered versions of the congressional maps must be adopted by the respective states. In other words, the Republicans in the state legislature in Harrisburg are the only ones who can establish the congressional maps, and they have the sole and ultimate authority to draw them any way they choose. This would remove the critical checks in a functioning democracy provided by state and federal courts, governors, and secretaries of state. The Republicans in Harrisburg will use algorithms to supercharge partisan gerrymandering in Pennsylvania, creating convoluted district boundaries that ensure their party remains in control of Congress. The Democratic-leaning suburbs outside of Philadelphia will be severely gerrymandered to dilute the power of these voters.
The GOP legislature’s authority will extend beyond maps to every aspect of elections. The state legislature will decide everything, including whether drop boxes are available, whether we can still vote by mail, and whether ID is required. It can determine the number of polling places in cities, and deliberately make voting more difficult for minorities. It can restrict early voting, and can implement policies where voters are purged from the rolls every year – compelling everyone who wants to vote to re-register. The legislature can also make it easier to harass and intimidate voters by permitting out-of-county partisans to brandish weapons under the guise of “poll watching”. Here in Pennsylvania, voters would be left to the whims of characters like Mastriano, who has not met a form of voter suppression he has not endorsed. By unleashing unchecked control over elections, the Supreme Court is getting ready to hand the keys for running the entire country to the most extreme faction of the GOP in history. The Supreme Court will obviate the need for a violent, fascist right-wing coup by giving legitimacy to the anti-democratic theories advanced by many of the same lawyers who advised Trump on how to overthrow our democracy. We have essentially less than one year to do everything we can to avert the coming crisis. Our democracy is in a “break glass in case of emergency” moment.

This article was originally published July 21, 2022.
Q: I have heard more people are finding out they have sleep apnea. Is obesity the only cause or are there other reasons people develop this problem?

A: Sleep apnea is more common in males and post-menopausal women, and the risk increases with age. As we get older, even very thin people are at increased risk for sleep apnea. Smoking also can contribute. Some people are more predisposed because of their anatomy. For example, those with large tongues and soft palates, smaller jaws and large tonsils might be at greater risk. There also are many illnesses associated with sleep apnea, including high blood pressure, stroke, diabetes and heart disease. But whether these cause sleep apnea is unclear.

Q: My daughter has a below-average normal body temperature, usually around 97 degrees. Her doctor has said it’s not a problem, but we have wondered how many people have a similar situation and why it happens.

A: There is a wide range of body temperatures that are considered normal, and temperature varies from place to place on the body. Body temperature can fluctuate by as much as 0.9 degrees Fahrenheit in normal, healthy individuals. It is lowest in the early morning and rises throughout the day, depending on your activity level.

Sources: Dr. Aneesa Das, Ohio State University’s Wexner Medical Center; Dr. Kevin Frey, OhioHealth’s Millhon Clinic
Language Learning & Technology
Vol. 5, No. 2, May 2001, pp. 8-12

Testing Tools and Technologies
Virginia Commonwealth University

Computers have been used in language assessment since at least the 1960s. The PLATO project at the University of Illinois pioneered the use of networked computers for language practice and testing. However, the use of computers in language testing did not become widespread and generally available until the advent of the personal computer in the late seventies and early eighties. Among the better-known software packages from the early (DOS) days is Calis from Duke University (still available as an unsupported product). It was designed for active drill and practice of grammar and vocabulary, rather than formal assessment. This was the case as well for Dasher, a widely used Mac-based program from the University of Iowa. Both programs provided for varied feedback options and recognition/display of partially correct answers. In addition to dedicated language software, generic authoring tools were often used to develop language drill and assessment programs. The best known of these are HyperCard (from Apple) and ToolBook (from Asymetrix, now click2learn). With both, multimedia could be integrated into the tests or exercises, allowing for more options, including assessing listening comprehension. The arrival of CD-ROM greatly facilitated the use of multimedia in language programs by providing the necessary storage capacity. There are today successors to these stand-alone authoring programs, such as WinCalis, the Windows version of Calis. One of the attractive features of WinCalis is its support for Unicode (ISO 10646), which allows representation of a great variety of languages and alphabets simultaneously in an application. MaxAuthor, from the University of Arizona, is another Windows-based authoring program for language testing and practice. It also supports a variety of languages, and lessons can be made Web-accessible.
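A short sketch may make the Unicode point concrete. The snippet below is purely illustrative (Python, not WinCalis code): a single Unicode string can mix scripts that no single legacy 8-bit code page could represent together.

```python
# Illustrative sketch, not WinCalis code: one Unicode string mixing scripts
# that no single legacy 8-bit code page could hold at the same time.
item = "book = книга = 本 = βιβλίο"  # English, Russian, Japanese, Greek

# Every character maps to one code point in the same universal space (ISO 10646).
for word in ("книга", "本", "βιβλίο"):
    print(word, [hex(ord(ch)) for ch in word])
```

Under code-page encodings, the Cyrillic, CJK, and Greek portions of a test item would each require a different character set; under Unicode they coexist in one string.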
Although some testing applications have taken advantage of the availability of local area networks (particularly for storing scores centrally), the arrival of the World Wide Web in 1993 with its rich and powerful network environment provided a more attractive -- and ever more pervasive -- networking option. The Web offers the advantages of centralized delivery (and authentication) as well as server-based score storage and retrieval. Initially, the user experience with Web-based tests was not much different from pen-and-paper versions, with relatively little interactivity or user feedback. Some Web tests continue to use similar approaches, with test scoring provided by e-mail or separate Web pages showing the answers (for the test taker to compare with his/her answers). The advantage of using the Web, however, is in the interactivity it enables. This is generally done through the use of Web form pages which are processed by CGI ("Common Gateway Interface") scripts, usually written in Perl. Tests delivered through CGI typically are in machine-correctable formats such as multiple choice or true-false, using checkboxes, radio buttons, or pull-down menus. Usually users must complete the entire test before submitting it and receiving feedback. In CGI-based formats, feedback options are limited and there is rarely recognition of partially correct answers. One area in which there has recently been considerable activity is the development of Web-based language placement exams. Among them are those from Macalester, BYU, and Northwestern. All use server-based CGI delivery for security reasons. Of interest is the use of a computer adaptive testing approach in some on-line placement exams, such as the WebCAPE tests from BYU. The placement exams under development at Ohio State University use an adaptive testing mechanism combined with authentic language materials shown in their original context.
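The server-side scoring such a CGI script performs can be sketched in a few lines. The example below is a rough illustration only (Python here, though the scripts of the era were usually Perl); the form field names and the answer key are invented for the sketch.

```python
# Hedged sketch of CGI-style scoring of a machine-correctable test.
# The field names (q1, q2, q3) and the answer key are invented examples.
from urllib.parse import parse_qs

ANSWER_KEY = {"q1": "b", "q2": "true", "q3": "c"}  # hypothetical key

def score_submission(query_string: str) -> tuple:
    """Return (number correct, total) for a submitted form query string."""
    form = parse_qs(query_string)
    correct = sum(
        1 for q, ans in ANSWER_KEY.items()
        if form.get(q, [""])[0].strip().lower() == ans
    )
    return correct, len(ANSWER_KEY)

print(score_submission("q1=b&q2=false&q3=c"))  # -> (2, 3)
```

A real CGI script would also emit an HTML results page; only the scoring step is shown here, and, as the article notes, this all-or-nothing matching gives no credit for partially correct answers.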
In addition to these authoring tools, quiz/exercise templates are also available for language teachers, such as those from Marmo Soemarmo. His site provides examples of exercises in a great variety of formats: true-false, multiple choice, matching, feature or category identification, short answer, cloze, sentence generation, hypertext, memory, spelling. By downloading the source code for the examples provided, new content can be added by following the comments included in the code. Test and exercise templates for language learning are also available from Douglas Mills and George Mitrevski. Computerized testing will inevitably increase in volume and scope. This is happening in all areas, including in major national and international standardized tests. This growth is not without controversy, as evidenced by the reaction to the ETS announcement of the use of computerized testing in Africa. As schools demand more frequent standardized testing of students, more of that testing will migrate to computer formats. Many states provide practice tests for students on the Web, such as those from Edutest for the Virginia "Standards of Learning" exams. On the server end, JSP ("Java Server Pages") is becoming an attractive alternative to CGI. JavaServer Pages technology uses XML-like tags and scriptlets written in the Java programming language, but incorporated into the HTML code, to provide an equivalent to CGI. Java "servlets" residing on the Web server are able to interpret this code and execute the processing of the Web forms. The idea is to separate the page display and formatting from the programming logic, so that interactive pages can be created and maintained by conventional HTML/XML tools. While the approach is similar to that used by Microsoft's ASP ("Active Server Pages"), the JSP approach provides more programming and scripting flexibility as well as multi-platform support.
In any case, there is likely to be a database back-end that keeps test information, including questions and answers, as well as scores. There are well-established methods for connecting databases with Web servers, such as ODBC ("Open Database Connectivity"). A popular method to interact with databases is the use of middleware or application development software such as ColdFusion (from Allaire, recently merged with Macromedia), Tango, or Lasso.

Web-Based Testing Resources
- Organizations and Institutions
- Sample On-line Practice Tests
- Language Placement Tests
- On-line Test Makers, Tools, and Templates

All links validated on April 16, 2001.
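A minimal sketch of such a score-keeping back-end is shown below, using Python's built-in SQLite module as a stand-in for an ODBC-connected database server; the table and column names are invented for illustration.

```python
# Hedged sketch of a score-keeping back-end. SQLite stands in for the
# ODBC/middleware-connected server databases described in the text;
# the schema (scores: student, test_id, score) is an invented example.
import sqlite3

conn = sqlite3.connect(":memory:")  # a real deployment would use a server DB
conn.execute(
    "CREATE TABLE scores (student TEXT, test_id TEXT, score INTEGER)"
)
conn.execute(
    "INSERT INTO scores VALUES (?, ?, ?)", ("jsmith", "placement-fr", 42)
)

# Retrieve a stored score, as a results page or instructor report might.
(best,) = conn.execute(
    "SELECT MAX(score) FROM scores WHERE test_id = ?", ("placement-fr",)
).fetchone()
print(best)  # -> 42
```

Middleware products such as ColdFusion wrap this same insert-and-query cycle in template tags rather than program code.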
Class 9 Science needs detailed study, as students are introduced to various new topics that act as the base for their future studies. CBSE revamps the syllabus and pattern from time to time so that students get the latest knowledge. With a changed syllabus and pattern, students may feel stressed. To reduce their stress and help them in exam preparation, we have provided the CBSE Sample Papers for Class 9 students.

CBSE Sample Papers for Class 9 Science

CBSE Sample Papers for Class 9 Science are the best way to practise concepts and prepare for annual exams. Students can get acquainted with the real question paper pattern along with the marking scheme, which will help them in analysing their level of exam preparation. Students must try to solve all the questions, whether they carry 1, 2, 3 or 5 marks. Go through all the important diagrams and practise them, as these can help students add extra marks. These sample papers adhere to the CBSE Class 9 Syllabus and cover all the important chapters of the NCERT Books from an exam perspective.

CBSE Class 9 Sample Papers 2021
- CBSE Class 9 Science Sample Paper 2021 PDF – Set 1
- CBSE Class 9 Science Sample Paper 2021 PDF – Set 2

CBSE Class 9 Sample Papers Based on the Old Pattern

We have also provided the unsolved Class 9 Sample Papers for students’ practice. Download and practise them to be thorough with the question paper pattern and difficulty level of the exam.

CBSE Sample Paper for Class 9 Science without Solution
- CBSE Sample Paper for Class 9 Science Set 6
- CBSE Sample Paper for Class 9 Science Set 7
- CBSE Sample Paper for Class 9 Science Set 8
- CBSE Sample Paper for Class 9 Science Set 9
- CBSE Sample Paper for Class 9 Science Set 10

CBSE Class 9 Science SA1 and SA2 Sample Papers

Students can also access the CBSE Class 9 Science sample papers for the SA1 and SA2 exams below.
- CBSE Class 9 Science Sample Papers SA1
- CBSE Class 9 Science Sample Papers SA2

Features of CBSE Class 9 Science Sample Papers

The CBSE Sample Papers provided here have the following features:
- These papers are created by subject experts exclusively for CBSE Class 9 students.
- The papers are created as per the latest exam pattern and syllabus.
- Sample papers cover important topics from an exam perspective.
- Some difficult questions are also included in the papers so that students get good practice for the exam.

We hope students have found this information on “CBSE Class 9 Sample Papers” useful for their exam preparation. Solving these papers will boost their exam preparation. Keep learning and stay tuned for further updates on CBSE and other competitive exams. Download the BYJU’S App and subscribe to the YouTube channel to access interactive Maths and Science videos.
The word business really refers to any entity or individual engaged in commerce. Businesses may be either for-profit or non-profit organizations. A for-profit business is one that makes a profit by meeting a particular market need, providing a product or service that meets or exceeds the needs of its customers. Non-profit organizations usually seek to alleviate some of the social or governmental problems their community faces. There are many different industries and enterprises engaged in providing goods or services. These include, but are not limited to, clothing shops, grocery stores, restaurants, motels, hotels, bars, warehouses, and department stores. Most individuals engage in several kinds of business operations; almost everyone has engaged in some sort of business transaction at least once in their lives. Because of this, most people have some knowledge of how businesses operate, and they are also likely to know of other businesses they could help improve. The following paragraphs discuss various business practices that can be carried out to improve business operations. A business activity occurs when products or services are sold to customers at a profit. Business owners are considered owners when they actually earn a profit from the operation of their business activity. Business owners do not always earn money directly from the sale of goods or services; some receive a portion of the earnings from their companies. Purchasing goods and services from other businesses is a crucial part of every business activity. It is common practice in most business activities to obtain different goods and services from other businesses. These goods and services are then sold to customers at a profit.
Certainly one of the most effective ways to earn income from sales of goods and services is to obtain them from other companies that are willing to sell their goods and services at lower prices. Selling a service rather than a product is another common practice for many small businesses. In this type of sale, a business owner agrees to sell his or her time instead of a product or service. This practice is sometimes called "selling time". A good example of this would be someone hiring a contractor to carry out construction work on one's home. The concepts of "production" and "sale" can sometimes be confusing in the world of business operations, and the terms are often used interchangeably, even by professionals in several industries. A production process refers to the complete sequence of activities that take place throughout the manufacture of a product or service. For instance, one type of production process would be the processing of raw materials such as steel and oil in order to create a product such as a steel building. An economic activity, on the other hand, refers to the overall income that results from the sale of a product or service. The business activities in the preceding example would not yield a profit if the value of the finished goods were equal to the cost of manufacturing them. A firm can generate a profit from all its activities in a given fiscal year if its gross revenue exceeds its costs. If the value of the firm's assets exceeds its liabilities, the firm has a positive asset base. The financial statement of a business also records the difference between the value of its assets and its liabilities, known as equity. Every business must have managers who are responsible for the day-to-day operations of the company. These managers are variously referred to as managers, directors, or owners.
They manage people such as production workers, sales staff, and warehouse employees. There are key elements of management that every manager should master: planning, organizing, leading, and controlling. The planning stage of any business plan involves creating a strategy for the operations of the business. This strategy should address issues such as the nature of the products or services to be offered, marketing methods, technical requirements, research and development costs, and overall business plans. Market research can play an important role in planning. This part of the operation can be carried out through surveys of existing customers, analysis of market trends, and study of target markets. A business plan outlines the strategies by which the business will win new customers, and it includes information about the management system, capital requirements, management structure, operating procedures, and succession plans. Market analysis is important to a large extent: it involves collecting information from customers and evaluating the quality of the products or services offered. Another essential aspect of market research is analyzing the competition within the industry. The other important phases of a business plan include managing operations, preparing financial statements, and determining the location and opening of the business. These key aspects of operations and business plans are much the same in a standard business plan.
A good classroom activity is one that helps students focus on what they’ve learned, sharpen their thinking and bring imaginative approaches to problem-solving. Recently, two groups of students—graduate and undergraduate—at the University of Delaware pulled together their knowledge of hazards and disasters for just such an activity. The result was a unique “cross-over event,” a disaster management exercise centered on the scenario of a major hurricane striking the UD campus. Tricia Wachtendorf, professor of sociology and criminal justice, and James Kendra, professor of public policy and administration, teamed up to offer this opportunity for their students. The professors are also co-directors of UD’s Disaster Research Center. Undergraduates in Wachtendorf’s “Disaster and Society” course spent this spring semester discussing such topics as the social foundations of disasters, warnings and evacuations, and the social capital and influx of volunteers and donations in the post-disaster environment. Meanwhile, graduate students in Kendra’s “Issues in Disaster Response” course focused in-depth on typical challenges that arise in disaster. In an innovative variation, the graduate students designed a table-top exercise and guided the undergraduates as they tackled fast-paced disaster challenges.
Luthfi Nur Fajrina, Luthfi (2016) ANALISIS STRUKTUR DAN MAKNA ADVERBIA TSUNE NI SERTA SHIJUU DALAM KALIMAT BAHASA JEPANG 日本語の文書に副詞「常に」と「始終」の構造と意味. Undergraduate thesis, Universitas Diponegoro. ABSTRACT Fajrina, Luthfi Nur. 2016. “Analysis of the Structure and Meaning of the Adverbs Tsune ni and Shijuu in Japanese Sentences”. Thesis, Department of Japanese Studies, Faculty of Humanities, Diponegoro University. Advisor: S.I. Trahutami, S.S., M.Hum. The main questions of this research are: 1. How do the adverbs tsune ni and shijuu behave in terms of structure and meaning in Japanese sentences? 2. What are the differences and similarities between the adverbs tsune ni and shijuu? The purposes of this research are: 1. To describe the structure and meaning of the adverbs tsune ni and shijuu in Japanese sentences. 2. To identify the differences and similarities between the adverbs tsune ni and shijuu. The data were collected from Asahi Shimbun Digital articles, Yomiuri Shimbun Online articles, and the book Watashi no Sutairu wo Sagashite, yielding 38 data tokens in total. The author used three methods in this research. The data were collected with the simak (observation) method, using the sadap (tapping) and catat (note-taking) techniques. The data were then analyzed with the agih (distributional) method, using the ganti (substitution) technique, while an informal method was used for presenting the results. The data were analyzed using theories compiled from Kyousuke, Chino, Hayashi, and others. The results of this research show that the adverbs tsune ni and shijuu share a similar meaning: both indicate repetition of the same thing. From a structural perspective, the adverbs tsune ni and shijuu can be followed by verbs, nouns, adverbs and adjectives. However, the two adverbs do not always modify the word that follows them.
The adverb tsune ni is lower in frequency than shijuu and less formal; it can be used to indicate a repeated event whether or not it occurs at a particular time, and it can be used when communicating with children. The adverb shijuu, by contrast, is higher in frequency than tsune ni and more formal; its use is mainly to indicate a repeated event occurring at a particular time, and it cannot be used when communicating with children. Keywords: structure, meaning, synonym, adverbs, fukushi, tsune ni, shijuu

Item Type: Thesis (Undergraduate)
Subjects: P Language and Literature > P Philology. Linguistics > P1-1091 Philology. Linguistics > P101-410 Language. Linguistic theory. Comparative grammar > P325-325.5 Semantics
Divisions: Faculty of Humanities > Department of Japanese
Deposited On: 19 Dec 2016 12:25
Last Modified: 19 Dec 2016 12:25
The Archaeological Museum The wonderful Archaeological Museum of Piraeus, with the courtyard where the city's only ancient theater has been preserved, is on Trikoupi street. The first Archaeological Museum of Piraeus was founded in 1935 near the ruins of the Hellenistic theater of Zea. Today that building is used as a warehouse for sculptures, while its successor, the new museum, was opened on the same site in 1981. The Archaeological Museum of Piraeus is one of the lesser known Greek museums. Over 10 halls, the city comes alive before the visitor's eyes, as the exhibits, from archaic to Roman times, silently but eloquently tell the story of the port's years of prosperity and decline. Among them stand the imposing funerary monument of Kallithea; the famous bronze statues, including a larger-than-life Athena whose helmet is decorated with griffins and owls; a bronze shield of the fourth century BC; the statue of Artemis; and the oldest known cast statue, an austere archaic kouros known as the Apollo of Piraeus (6th century BC). From the era of naval domination, when light triremes were fitted with bronze rams, we see a bronze ram, the great glory of the museum, which is probably the oldest surviving example (4th century BC) in the world. It is a unique exhibit, definitely worthy of the place where the masterpieces of the history of the largest port in ancient Greece are displayed. Piraeus lost its glamour in the late 5th century BC, and the period of Roman domination sealed the end of the city. The decorative Athenian paintings on display were meant to decorate a Roman building, but they sank with the boat that transported them and were found in 1933 in a shipwreck at the bottom of the harbor of Piraeus.
What are the effects of exposure to pests on a museum's collection? Pests can cause significant damage to a museum's collections, particularly when objects are stored and infrequently accessed. How can museums safely prevent pests from entering and destroying an institution's culturally valuable artifacts? This page illustrates the types of pests likely to invade a museum environment and how Integrated Pest Management (IPM) can safely eliminate an infestation without the use of harmful pesticides.

Pest Identification

The chart below outlines various types of pests that may be attracted to collection materials and the types of artifacts affected:

|Type of pest|Artifacts affected|
|black carpet beetle (larva and adult)|larval stage damages fabric, fur, feathers, anything made of animal fibers. Related species: varied carpet beetle, common carpet beetle and furniture carpet beetle|
|clothes moth (larva and adult)|larval stage damages woolen clothes and objects such as feather hats, dolls and toys, bristle brushes, weavings, and wall hangings. Related species: webbing clothes moth and casemaking clothes moth|
|powderpost beetle|wooden artifacts, frames, furniture, tool handles, gun stocks, books, toys, bamboo, flooring, structural timbers|
|drywood termite|wooden items of all kinds|
|cigarette beetle|books, dried plants (herbarium) and seeds|
|drugstore beetle|books and manuscripts; also beans and spices|
|molds-fungi|wood, textiles, books, paper products, fabrics, insect specimens|
|psocid (also referred to as booklice in the plural)|dried plants, herbaria, insect collections, manuscripts, cardboard boxes, furniture stuffed with flax, hemp, jute or moss|
|silverfish|paper, paper products and textiles (cotton or artificial silk), glue backing on wallpaper. Related species: firebrats|

The Integrated Pest Management Approach

Implement a variety of non-toxic approaches to create an inhospitable environment for pests.

Check the collections frequently . . . Making regular scheduled inspections of all collections, whether on display or in storage, is a necessary first step in detecting pest infestation. The presence of feeding debris or frass (wood that has passed through the digestive system of a beetle) is an indication of infestation. The appearance of exit or feeding holes in wooden items, silken cocoon cases, hair falling from fur or pelts, droppings, or moth or beetle pupae are also signs of infestation. The use of small sticky traps placed in areas throughout a facility and inside storage units can aid in tracking and identifying pests. At the Museum, traps are checked biweekly and any pests found are identified and recorded. Reports are generated monthly to track areas and the types of insects found. Problems are immediately addressed by isolating and treating objects and eliminating the insect source.

Housekeeping and vigilance are important . . . The presence of dust, dirt and food is attractive to pests. Keeping the storage and exhibit areas clean and food-free is an effective approach to preventing a pest infestation. Implementation of policies and procedures for incoming acquisitions, such as careful examination and isolation for 48 to 72 hours, is also recommended. This practice enables the Museum staff to monitor new artifacts and ensure that they are free from infestation. Inspecting the facility for cracks around windows, doors, floors and the foundation is also important, as these areas may offer easy access for pest infiltration.

Environmental controls . . . Controlling temperature and humidity in a museum environment is not only a concern for the collections but for pest prevention as well. Low humidity and, to a lesser extent, low temperatures reduce the chance of pest infestation and slow the growth of existing pest populations.

Checking the building envelope . . . Securing the interior and exterior areas of the building can prevent pests from entering the building and ultimately the collections. Sealing infiltration areas (cracks and gaps in the foundation, windows and doors), correcting drainage problems and installing sweep gaskets on exterior doors are good measures for preventing pests from coming inside.

What to do if signs of infestation occur . . . The Integrated Pest Management (IPM) approach advocates preventive activities that avoid the use of chemical treatment. Two methods for insect eradication using the IPM method are given here. Anoxic treatment (the elimination of oxygen from a microenvironment) involves the use of oxygen-impermeable bags with some form of oxygen scavenger inside, depriving the pest of oxygen. This treatment is generally used for small groups of objects or single objects that are infested. Freezing in a large commercial freezer that can reach temperatures of 0°F or lower is an effective way to treat collections. Materials such as herbarium specimens, books, and textiles can be treated for infestation in this manner. Some objects, such as those made of wood, lacquer and bone, may be adversely affected by freezing.
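The biweekly trap checks and monthly reports described above amount to a simple record-keeping routine. As a minimal sketch only, here is one way such a log could be tallied; the record fields, locations, and pest names are illustrative assumptions, not the Museum's actual system.

```python
# Hypothetical sketch of an IPM trap-monitoring log: each trap check is
# recorded, and a monthly report tallies pests by location so problem
# areas can be isolated and treated. All example data is invented.
from collections import Counter
from datetime import date

inspections = [
    # (inspection date, trap location, pest found)
    (date(2024, 5, 2),  "textile storage", "webbing clothes moth"),
    (date(2024, 5, 16), "textile storage", "webbing clothes moth"),
    (date(2024, 5, 16), "herbarium",       "cigarette beetle"),
    (date(2024, 6, 1),  "frame store",     "powderpost beetle"),
]

def monthly_report(records, year, month):
    """Count (location, pest) pairs for one month of trap checks."""
    tally = Counter(
        (loc, pest)
        for d, loc, pest in records
        if d.year == year and d.month == month
    )
    return dict(tally)

print(monthly_report(inspections, 2024, 5))
# → {('textile storage', 'webbing clothes moth'): 2,
#    ('herbarium', 'cigarette beetle'): 1}
```

A repeated pest at one location across consecutive monthly reports would flag that area for closer inspection and isolation of the affected objects.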
Shared Spaces Campaign

What's the problem?

Shared space is a new design concept for town centre and high street developments, often delivered by means of a shared surface street design. In most cases the design involves removing the kerb that has traditionally separated areas for vehicles and pedestrians, creating a shared surface street. The shared space concept aims to create attractive shared 'social' areas and to reduce the dominance of vehicles, making streets more 'people-friendly'. In a shared surface street, the design of the road and its surroundings is altered to change the behaviour of drivers, encouraging them to be extra cautious as they negotiate the new road layout. Pedestrians, motorists and cyclists need to make eye contact to establish who has priority. However, this puts blind and partially sighted people at a serious disadvantage. Blind and partially sighted people, particularly guide dog owners and long cane users, are trained to use the kerb as a key navigation cue in the street environment. Its removal, without a proven, effective alternative feature, exposes blind and partially sighted people to greater risk, undermines their confidence, and so creates a barrier to their independent mobility. The kerb is also vital for children's safety when using roads. From an early age children are taught, as part of the Green Cross Code, to Stop, Look, and Listen at kerbs. If these kerbs are removed, how will children know where to stop? Guide Dogs supports the aim of creating attractive 'people-friendly' street environments but opposes the use of shared surface streets to achieve this. For background information on our previous campaigning work on the issue of shared surface streets, please read a copy of our Campaign report. Shared surface streets are not just an issue for blind and partially sighted people.
Our concerns have been well supported by a wide range of disability organisations who share concerns about the dangers of these street designs for other vulnerable road users.

Shared Spaces Campaign

What do we want to change?

[Image: A guide dog owner on a shared surface street]

We want to stop the introduction of shared surface street schemes across the UK. We are calling for:
- Government across the UK to take leadership and ensure guidance is issued to local authorities on how streets should be designed without recourse to shared surface streets.
- Local authorities to stop commissioning shared surface streets, as such schemes discriminate against blind and partially sighted people.
- Designers and planners to challenge themselves to create attractive, people-friendly streetscapes that have inclusion at the heart of the design.

Effective and meaningful consultation with blind and partially sighted people, and people with other disabilities, is also vital during any urban street design planning. It is essential that the Disability Discrimination Act and current Government policy and guidance on inclusive design, social inclusion and meaningful community involvement are taken fully into account during the design, development and delivery of any new streets. It is imperative that local authorities test proposed new designs before they are implemented and consult local groups and disability organisations at all stages in the process of developing our streets. This does not mean that voluntary groups, or indeed disabled people themselves, should be expected to provide solutions. It is the responsibility of designers and planners to meet the needs of disabled people in the built environment by designing and implementing safe, accessible streets for all users.

Shared Spaces - Research Reports

Design trials research with University College London (UCL)

Guide Dogs looked at how to delineate a safe space in a shared surface street if a traditional kerb is not used.
Working with UCL, we found that none of the delineators tested was effective enough for us to recommend its use between areas for pedestrians and vehicles.

Effective Kerb Heights for Blind and Partially Sighted People (word) 588kb
Effective Kerb Heights for Blind and Partially Sighted People (pdf) 699kb

The Design Trials research report, 'Testing proposed delineators to demarcate pedestrian paths in a shared space environment', is available here to download.

Focus group research

We undertook in-depth research into the experiences of blind and partially sighted people in shared surface streets, both in the UK and in the Netherlands, where advocates of 'shared space' maintain that shared surface streets work well. We assessed the risks and impact of these schemes and found that the safety, confidence and independence of blind and partially sighted people were undermined, with parts of some towns becoming no-go areas. We commissioned Rambøll Nyvig, an international design practice specialising in streetscape and public realm design, to consider how shared space street design could include the needs of blind and partially sighted people. In their report 'Shared space - safe space' they advised introducing a 'safe space' in any shared space street design. This 'safe space' is an area equivalent to a pavement where vulnerable pedestrians would feel safer, but it would not prevent the remaining area being shared by other pedestrians, motorists and cyclists. The requirement for a 'safe space' was recognised in the 'Manual for Streets' report, published by the Government in 2007.

Recognising the issue

The Department for Transport has recognised Guide Dogs' concerns about the implementation of shared surface streets in shared space street designs. They have commissioned research to provide evidence-based guidance on implementing shared spaces. Guide Dogs will participate in this research on the Sounding Board.
The Disabled Person's Transport Advisory Committee (DPTAC) - advisors to the Government - has issued a statement sending a clear message to local authorities that in a shared space, kerbs must be retained until an effective alternative is found. The DPTAC Statement can be found here. The Commission for Architecture and the Built Environment (CABE) has also recognised the issues in its publication 'Civilised Streets'. Guide Dogs has created an advisory booklet for local groups concerned about the use of shared surface streets in their town centres, so that they can get involved in the development of any schemes. Briefing for Local Groups (word) 41K
Efficiency of conventional electricity generation in 1990 and 2003 In some Member States, the efficiency of combined heat and electricity production increased faster than that of electricity production alone. 1990 electricity data for Germany refers to 1991. For references, please go to www.eea.europa.eu/soer or scan the QR code. This briefing is part of the EEA's report The European Environment - State and Outlook 2015. The EEA is an official agency of the EU, tasked with providing information on Europe’s environment. PDF generated on 28 Dec 2014, 09:10 AM
This week in history: March 11-17 11 March 2013 This Week in History provides brief synopses of important historical events whose anniversaries fall this week. 25 years ago: Reagan deploys troops against Sandinistas On March 17, 1988, the same day that four defendants in the Iran-Contra scandal were indicted by a US grand jury, the Reagan administration dispatched US airborne and infantry forces into Honduras, north of the Nicaraguan border. In what was termed an “emergency deployment,” the 82nd Airborne Division and the 7th Infantry were sent on a no-notice basis, joining some 3,000 troops already on the ground and moving into position to protect a Contra military leader. Immediately, US forces conducted live-fire exercises and marched to within three miles of the Nicaraguan border. The operation, called “Operation Golden Pheasant,” represented a direct threat of war against the Sandinista government in Nicaragua, which had, earlier in the month, sent troops into Honduras to overrun Contra munitions dumps in the San Andrés de Bocay region, where they were staged to supply Contra raids into Nicaragua from the north. Sandinista forces immediately retreated back to Nicaragua, and President Daniel Ortega made a national radio broadcast calling on the Nicaraguan people to “be in a state of combat readiness and prepared to repel, resist and defeat any attempted aggression by the United States against Nicaragua.” The same day in Washington, a US grand jury convened by Special Prosecutor Lawrence Walsh issued indictments against four key Iran-Contra figures: former National Security Adviser John Poindexter, former National Security Council aide Oliver North, retired Maj. Gen. Richard Secord and Iranian arms dealer Albert Hakim. Twenty-three charges were made, ranging from conspiracy to embezzle millions of dollars to the theft of traveler’s checks.
The network of conspirators sold arms to Iran to generate funds for the illegal war against the Sandinista regime in Nicaragua, which had, in 1979, led a revolt to overthrow the longstanding US-backed dictatorship of Anastasio Somoza. Reagan publicly played down the charges and, 10 days later, pardoned all four defendants. 50 years ago: Baathists visit Cairo after Syria, Iraq coups Delegations of Baath Party members from Syria and Iraq, fresh from successful military coups in the two Middle Eastern states, arrived in Cairo for “unity” discussions with the pan-Arabist Egyptian regime of Gamal Abdel Nasser on March 15, 1963. However, no meaningful agreement was reached. The Syrian coup had taken place on March 8, and the Iraqi coup exactly one month earlier, on February 8. The latter, which resulted in the assassination of the nationalist, Soviet-backed Prime Minister Qasim, was a clear foreign policy victory for the US, whose Central Intelligence Agency had advance knowledge of the coup and allegedly supplied the Iraqi Baathists with the names of Iraqi Communists, who were then killed or imprisoned. The Syrian coup was the third in the country in less than three years, a period of plots and intrigue that, as in Iraq, divided rival factions of the military and the elite—all purportedly nationalist and pan-Arabist—against each other. A leading role in the coup was played by the young Alawite officer Hafez al-Assad, who captured the nation’s most important air force base. As was the case in Iraq, Syrian communists were persecuted in the wake of the coup. Whatever the stated intent, the Baathists’ pilgrimage to Cairo proved unsuccessful. The United Arab Republic (UAR) of Egypt and Syria had collapsed less than two years earlier with the withdrawal of Syria.
In each of the three Arab states, bourgeois factions linked closely to rival economic interests and ethno-religious groupings sought to perpetuate their own interests through the vehicle of the existing state structures and within the borders arbitrarily drawn in the desert by British and French imperialism. The Arab nationalists bitterly opposed socialism—though in each country they attempted to tap into the broad popularity of socialism by misappropriating the name—and in each case after taking power the Nasserites and Baathists, Pan-Arabists and nationalists repressed the organizations of the working class. 75 years ago: Old Bolsheviks executed at conclusion of third Moscow frame-up trial As the judicial monstrosity of the third Moscow frame-up trial reached its foregone conclusion, 18 of the 21 defendants were executed March 15, 1938, shot to death through the head and neck. Four of the most prominent surviving old Bolsheviks were put to death—Nikolai Bukharin and Alexei Rykov, the former leaders of the Right Opposition, and Nikolai Krestinsky and Arkady Rozengolts, once associated with the Left Opposition. In the words of Soviet historian Vadim Rogovin, “The third Moscow Trial was a laboratory of the big lie which exceeded in its cynicism and shamelessness all the previous judicial stage adaptations.” The big lie served a political purpose, Rogovin explained: “Many aspects of the third trial can be correctly understood only if one considers that the trial itself was part of a ruthless political struggle in which Stalin was continuously receiving devastating ideological blows from Trotsky.” Chief prosecutor and one-time right-wing Menshevik Andrei Vyshinsky delivered his closing speech for the prosecution four days before the executions. Spewing lies, calumnies and distortions in his mechanical monotone, his voice only occasionally rising in contrived indignation, Vyshinsky spoke for five hours. 
Two hours were devoted to consolidating the prosecution case against Bukharin. Only three of the 21 were spared death: the Old Bolshevik and close comrade of Trotsky, Christian Rakovsky, then 65 years old, D.D. Pletnev and Sergei Bessonov. Bessonov received 25 years in jail, instead of death, because his wholly fabricated evidence implicated the chief but absent defendant, Trotsky. According to Bessonov the co-leader with Lenin of the Russian Revolution was at the center of a hydra-headed plot to kill Stalin, bring down the Soviet Union, and restore capitalism within its borders. All three initially spared were later shot in 1941. Commenting on the sentences, the Times of London asked incredulously, “If most of the men in high office since Lenin died have been traitors, and the man at the head too simple to suspect it, who has been governing Russia all these years?” The same article mocked the prosecution’s portrait of Trotsky and his “established role as a prince of the outer darkness.” Trotsky himself followed the proceedings closely and wrote some 20 articles and commentary for the world press on the “Trial of the Twenty-One.” The former leader of the Red Army was moved to ask at one point during proceedings: “A totalitarian regime is the dictatorship of the apparatus. If all the key points of the apparatus are occupied by Trotskyists, who are at my command, why in this case is Stalin in the Kremlin, and I am in exile?” 100 years ago: Mass demonstration against conscription in France On March 16, 1913, a crowd estimated at more than 100,000 people rallied in Pré-Saint-Gervais, just outside Paris, in opposition to militarism, and moves to expand compulsory active military service from two to three years. The demonstration was organized by the CGT, the major trade union organization, the socialist party, or SFIO, and anarcho-syndicalist organizations. 
A statement by organizers denounced “the criminal eventuality of war [and] the new military burdens under which they want to crush the country.” It condemned the proposed “Three-years law,” rejected official claims that its purpose was to “consolidate the peace,” and warned that the legislation threatened “to make a military conflict between the German and French people inevitable.” Fearing that the demonstration could spark a broader movement, the government blocked a trade union demonstration in honor of the Paris Commune from taking place a few days later. In response to growing popular opposition to the proposed law, Premier Louis Barthou ordered street demonstrations against it to be banned. Conscript soldiers staged a small revolt in May, in opposition to the prospect of additional compulsory service. A law passed in 1905, at the height of the movement against the military’s anti-Semitic frame-up prosecution of Captain Alfred Dreyfus, reduced compulsory active military service from three years to two. The decision to reinstitute three years of compulsory service, made by the conservative Poincaré after he was installed as president in January 1913, was widely viewed as a response to the expansion of Germany’s military and naval capacities over the preceding year. The three-years law was passed in May. It continued to be a focal point of opposition to militarism, and was a major issue in the 1914 elections, on the eve of the First World War.
Cholesterol is a type of fat in the blood. High cholesterol is when there is too much of this fat. There are 2 types: High LDL cholesterol can raise the risk of stroke and heart disease. High HDL cholesterol can lower the risk of stroke and heart disease. Cholesterol is made in the liver and comes from food we eat. High cholesterol may be caused by one or more of the following: Things that raise the risk of high cholesterol are: High cholesterol levels usually do not cause symptoms. Cholesterol can be measured in the blood. The test is done as part of a regular screening. For healthy adults this may be every few years. Those with risk factors for heart disease may be screened more often. Children may be screened if they are obese or have a family history of high cholesterol. Cholesterol screening is part of a blood test that will include: A doctor can advise how often a person should be tested for high cholesterol. This is often based on the person's family and medical history. The goal of treatment is to lower cholesterol levels. This will also help to lower the risk for heart disease and stroke. Treatment options include: Statins are a medicine that may help lower cholesterol. They may reduce the risk of heart attack and stroke. Even when using medicine, diet and exercise are important. Other steps that can help lower cholesterol levels include: To help reduce the chance of having high cholesterol, talk to the doctor about:

American Heart Association
National Heart, Lung, and Blood Institute
Dietitians of Canada
Heart and Stroke Foundation of Canada

Balder J, Rimbert A, et al. Genetics, lifestyle, and low-density lipoprotein cholesterol in young and apparently healthy women. Circulation. 2018 Feb 20;137(8):820-831.
High blood cholesterol. National Heart, Lung, and Blood Institute website. Available at: https://www.nhlbi.nih.gov/health-topics/high-blood-cholesterol. Accessed January 2021.
Hypercholesterolemia. EBSCO DynaMed website. Available at: https://www.dynamed.com/condition/hypercholesterolemia. Accessed January 18, 2021.
Prevention and treatment of high cholesterol. American Heart Association website. Available at: https://www.heart.org/en/health-topics/cholesterol/prevention-and-treatment-of-high-cholesterol-hyperlipidemia. Accessed January 2021.

Last reviewed February 2021 by EBSCO Medical Review Board
Marcin Chwistek, MD
Last Updated: 1/18/2021
The US considers IDA a tool for fighting extreme poverty. Nicolas Mombrial is Head of the Oxfam International office in Washington, DC. The World Bank is asking donors to refill the coffers of its fund for the world’s poorest countries, the International Development Association (IDA). Should Congress maintain its contribution to the fund as an investment in fighting global poverty? Oxfam, among other NGOs, has often been critical of World Bank policies and practices, but Oxfam supports this US investment in IDA. Why? Let me explain. When global health expert Jim Yong Kim became the World Bank’s president last year, he brought a breath of fresh air with him. His willingness to refocus the bank on eradicating poverty and fighting inequality is right on track. However, to do this, as well as enact other reforms at the World Bank, he needs the right tools. IDA, covering the 82 countries where 80 percent of the world’s poorest people live, is definitely one of them. The replenishment of IDA, which happens every three years, will be a first test of the United States’ and other donors’ willingness to see Jim Kim’s vision succeed. Some argue for cutting IDA or changing it significantly because many poor countries have graduated, or will graduate, to middle-income status, thus no longer being eligible to receive funds from IDA. Regardless, IDA is going to need to continue to support fragile and conflict-affected states, where it costs up to three times more to fight poverty. IDA is also going to have to take up new challenges, like helping poor countries adapt to and mitigate the impacts of climate change. It’s true that IDA’s performance could improve. Notably, a better job needs to be done of reporting how the fund contributes to the two goals articulated in the bank’s “common vision”: ending extreme poverty and promoting shared prosperity at the country level.
Detailed and comprehensively transparent information about how the bank tracks and spends IDA money in poor countries is needed, to show how and where its loans and grants are making inroads on ending extreme poverty. Regardless, it is clear that IDA has had some impressive results in the last decade, for instance, helping ensure 65 million people receive access to health services and 8.5 million people get access to seeds and fertilizers. Also, Publish What You Fund ranks IDA second among 72 donors in terms of transparency and IDA receives good reviews from its peers, including a “very good value for money” rating from DfID’s 2013 Multilateral Aid Review. IDA, as an investment for the US in global development, fits with President Obama’s declaration in his 2013 State of the Union address to “join with our allies to eradicate such extreme poverty in the next two decades.” IDA can certainly complement the significant US investment and knowledge in sectors such as food security with the Feed the Future and electrification via Power Africa. Success will be found in the willingness and effectiveness of these joint efforts to reach people who are poor in the developing world. IDA’s role in coordinating donor assistance and its use of country systems have made it an intrinsically effective poverty-fighting instrument. And for the US, cutting down on aid fragmentation and delivery costs makes it a worthwhile investment.
By Ram Kumar Bhandari Nepal had an armed conflict (1996-2006) between the Government of Nepal and the Communist Party of Nepal (Maoist) and continues to be a country struggling with fundamental social and political change. The armed conflict in Nepal resulted in more than 1400 cases of enforced disappearance. Families of the disappeared demand that every case be resolved in accordance with their needs and desires. The root causes of conflict in Nepal are complex and deeply entrenched in the fabric and history of society. Substantive change is many years away. In the short term, there is a critical need for acknowledgement of marginalized and disadvantaged populations in Nepal. Most victims share common needs: they want truth and recognition, and they seek psychosocial support, medical treatment, education and rehabilitation, and income generation and employment opportunities for a sustainable livelihood, which are the primary concerns of their wider families. Victims feel that Nepal’s transition is guided by legal and political, rather than humanitarian, social or moral, concerns, but they continue to hope for social justice. Nepal’s transitional justice process continues to be undermined by political elites’ defense of the country’s deeply entrenched system of impunity. Nepal’s conflict victims, who live primarily in rural communities with limited economic resources and little or no educational background, continue to be a particularly marginalized group. Victim families’ needs and priorities are mostly excluded from the agenda of the political elites. In Nepal, the majority of the population lives in rural villages, while major decisions are taken in the political centres, i.e. the district headquarters and the capital city, which cannot address those rural grievances. Human rights lawyers and Non-Governmental Organizations (NGOs) “advocating for victims” focus on prosecution of war criminals.
Meanwhile, issues of social justice are ignored while political elites focus on amnesty for human rights violations that occurred during the armed conflict. The power elites in Nepal have refrained from consulting victims and victims’ organizations during processes of transitional justice, and have undermined victims’ desires for social inclusion, livelihood, security and memorialization. In this joint article, we explore victim-centric, victim-led, family-based solutions to enable a sustainable future for conflict victims who still wait for truth, memory, justice and livelihood support.
Let’s be honest: we were all afraid. When COVID-19 started its sweep across Canada in 2020, we were startled, not only by the emergence of this unknown virus, but also by the sudden shutdown of society. Many of us willingly stayed home because this virus was so new that we didn’t want to risk infection. And as businesses closed, so did schools. Suddenly public health became the governing principle behind every decision we made. Now, two years later, educators are just beginning to hit their stride as things are normalizing. The number one priority remains safety. However, hovering closely behind is the inevitable skill and learning gap that has emerged because of the shutdowns and the sudden shift to remote learning that, depending on where a student lives in Canada, may have taken them away from the physical school building for months. The learning gap Education researchers have long understood that any absence from school leads to learning deficits among students. The ‘summer slide’ or ‘summer setback’ has been well documented: student retention of literacy and numeracy skills in particular regresses in the summer months when students are away from school. If the summer months lead to a learning gap, what kind of gap did the pandemic create? This is the question that many education researchers are asking, and the answers aren’t immediately forthcoming. In fact, some educators feel like the learning gap is the dirty little secret that provincial governments and school boards don’t want to deal with. Kelly Gallagher-MacKay, an assistant professor and researcher at Wilfrid Laurier University, says the skill gap has been greeted with ‘deafening silence’ by the Ontario government.
The University of Toronto’s Scott Davies agrees, adding, “We’re a little in the dark … We don’t know exactly where we stand and there may not be the same motivation if you don’t have data points to say there’s a certain need to be addressed.” The pandemic has been front and centre for two years, and so has the evolving learning gap. It takes planning to overcome these deficits and, according to some, no significant planning is taking place. To be fair, the hyper-vigilance when it comes to public health has put the needs of students on the backburner (whether you agree with this strategy or not). With safety as a priority for both the government and educators, the fact that students have been falling behind—while very much on the radar—became something that many felt could be dealt with down the road. Well, it appears that we are officially ‘down the road’ and efforts to address the learning gap need to begin in earnest. How bad is it? Scott Davies partnered with Janice Aurini of the University of Waterloo to co-author a report early in the pandemic that surmised that students of average ability lost as much as three months of learning progression due to the first school shutdown in 2020. Students with higher needs and a tendency to struggle saw their progression drop back by as much as a year. They drew their conclusions by looking at the typical ‘summer setback’ that students of varying abilities experience from year to year and extrapolated that data over the course of the original pandemic shutdown combined with the summer break. Obviously, the gap has grown with subsequent school closures. Another study, this one by the University of Alberta’s Professor George Georgiou, found that students in the early grades were performing eight months to a year behind previous cohorts in reading comprehension as a result of school closures.
Notably, and this point comes up time and again, the lag was worse for students with greater learning needs or from less privileged socio-economic backgrounds. According to Georgiou, “The problem is that, if these kids … keep going through grade levels without having their reading performance fixed, then they will be experiencing all sorts of other issues.” While Georgiou’s work was with younger students, similar assumptions can be made for high school students. Some observers have been particularly fatalistic about the current situation. A Harris Poll commissioned by Express Employment Professionals (EEP) found that over 80% of respondents fear the pandemic school shutdowns have created ‘a lost generation of students.’ According to EEP, students were already lagging in skill development before the pandemic, and the worsening skill gap could mean dire consequences for the future labour market. From their perspective, drastic intervention is needed immediately. Room for optimism Despite the apparent doom and gloom, Thomas D’Amico, Director of Education for the Ottawa Catholic School Board, says there is room for some optimism. He believes that students learned important new skills over the course of the shutdowns, the most important being resilience, perseverance, self-discipline, and digital literacy. He says, “The learning loss is important for that targeted group of students that were completely disengaged or stopped attending. But for those that did participate, they have skills that they would never otherwise have had if it wasn’t for the pandemic. They will move along if we address their mental health and well-being. Then they’ll be in a position to continue to learn.” How are the provincial governments addressing the problem? Most provincial governments are heeding the call of those demanding action. Alberta says it will spend $45 million on literacy and numeracy support for the early grades. Ontario has also earmarked money—$20 million—for reading assessments in the early grades.
British Columbia has pledged $18 million to the students with the greatest needs. Meanwhile, Quebec is funding tutoring programs to help students get back on track. The one province that stands out in the fight against the learning gap is Prince Edward Island. PEI changed their curriculum, boldly acknowledging that school closures led to less learning. According to Tamara Hubley-Little of the PEI Department of Education, “Not only did we compact the curriculum in anticipation of interruptions, but in some cases, we took learning from the previous year and pulled it into the next year, so that students with learning gaps would be addressed.” The principal of Summerside’s Athena Consolidated School, Jerry McAuley, thought the curriculum revisions made sense because asking students to jump ahead without developing their skills would be too discouraging, adding, “It would be unfair for them. I think it would be unfair to ask the teachers to deliver that, without some of the foundational elements they really needed to be successful …” What do the teachers think? Educators have been open about their concerns regarding the learning gap. The CBC conducted a survey called Schooling Under Stress prior to the 2021-2022 school year. Over 50,000 educators across Canada filled in questionnaires dealing with the state of education. Their analysis was rather dire: more than half said students were not meeting the learning objectives set out in most courses. They admitted to grading more leniently, and worried that modified assignments and cancelled exams would set students back further. Even more startling was the fact that two-thirds of educators reported some students opted out entirely when classes went to remote learning. This position was echoed by a Canadian Teachers Federation survey in which only a small minority of teachers reported that “almost all” of their students were checking in online.
This was contrasted with 35% of teachers who said they had regular contact with a quarter of their students and 64% who said they were in regular contact with half of their students. Meanwhile, over 70% of the teachers in the CBC poll felt that some students will not be able to make up the learning that was lost while the schools were closed to in-person learning. Who was most affected? If you ask any educator who was affected the most by school shutdowns and the shift to remote learning, they’ll almost all say the same thing: students from new immigrant families; students with learning disabilities; students from lower socio-economic circumstances; students who lacked adequate access to technology; racialized students; and students whose parents had to work and couldn’t supervise their child’s studies. While this does not mean that all students in these groups suffered, it does paint a picture that, anecdotally, covers those who, when they returned to school after closures, demonstrated the largest learning gaps. According to Janice Aurini, the co-author of the study cited earlier, “Those kids who were already vulnerable—who already had summer learning losses and the challenge of having to catch up after summer vacation—are now entering school even further behind than they normally would have.” With in-person learning back on across Canada, plans to address the learning gap can be put in place. But first educators need to assess learners to create programming to meet students where they are at—whether that is lessons, units, or entire grade levels behind. Many teachers are very worried about the students who became disengaged from the learning process over the past two years. Studies suggest that the likelihood of a student graduating decreases exponentially for each week away from school. What do months away from in-person learning mean for entire school populations?
Clearly, aggressive interventions will be necessary in order to maintain a respectable graduation rate. The counsellor’s job A lot of effort has gone into describing what educators already know: the pandemic created the conditions for a learning gap to emerge. However, what many observers fail to remember is that the education community is in the business of dealing with learning gaps. Educators are creative, committed and devoted when it comes to student advancement. In other words, this is a problem that can be solved over time. Guidance counsellors can be the ones to keep a level head as the learning gap is addressed over the next few years. While students, parents, teachers and administrators may express panic over the learning lost over the previous two years, counsellors can share some perspective. Here are a few ideas that may help: Trust your teachers Teachers are trained to creatively address the learning needs of students. They are experts (or at least should be) in assessment and evaluation. Trust that they will put their teaching skills to work and help students get back on track. Also, remember that educators of all stripes have the best interests of students at heart. This is not going to change because it is the guiding spirit of the education process. Support your teachers Sometimes teachers need to be reminded that, at times, they can only do so much. While students, parents and (perhaps) the administration are demanding immediate movement on the learning gap issue, demonstrate to your colleagues that these things take time. Also, many boards have encouraged hybrid learning—an extremely taxing form of double-teaching where teachers are asked to teach the students in front of them and the ones attending virtually at the same time. Hybrid learning is a recipe for teacher burnout. Sometimes, when a teacher is in the middle of the teaching storm, they cannot see that there are limits to what they can do and that they cannot meet everything being demanded of them.
If lenient marking was part of the remote learning environment, don’t be surprised if panic-stricken students (and their parents!) express concern over lower grades. Remind students of the challenges presented over the course of the pandemic and the learning that was likely thwarted by a lack of in-person education. The learning gap is an impediment—not an insurmountable obstacle—that can and will be addressed. Also, encourage them to remember the skills they developed when schools were closed to in-person learning: mainly resilience and determination. Guidance counselling is often about messaging. In this case, the message needs to be clear: the learning gap will be addressed—professionally, over time. While others surrender to worry and panic, counsellors can be the voice of reason that trusts that the education community will do what the education community does best: solve problems, address learning challenges and help students advance. By: Sean Dolan
This article describes how to control which categories are used when computing percentages and averages on tables. The categories used when computing statistics on tables are controlled via Value Attributes (see the example below). These can be edited in a variety of ways, including by:
- Right-clicking on the blue or brown row/column name on the table and selecting Values.
- Pressing the button in the Variables and Questions tab.
- Right-clicking on a category that you wish to exclude and selecting Remove.
- Automatically when using Set Question.
- Via QScript.
Individual categories are included or excluded as follows:
- If a category is in the Missing Data column, it will be excluded from all calculations. (Note that if you can also see a Missing Data row, this refers to observations that are marked as missing values in the original data file.)
- If a category has its Value shown as NaN, it will be excluded from all numeric calculations (e.g., Average, Median), but will be included when computing percentages. Note that you will only see the Value column for data that can be represented numerically (e.g., it will not appear for Pick Any questions).
- If it is a Date question, there is a completely different set of options (see Setting Time Periods for Date Questions).
- If you have Pick Any or Pick Any - Grid questions, there is a column called Count This Value, which dictates the numerator when computing percentages.
In the example below, there are six unique categories in the data file: Like, Love, Neither like nor dislike, Hate, Dislike, and Missing data, and the settings shown tell us that:
- Anybody with Missing data is excluded from any calculations.
- The analysis will count up the number of people that have selected either Like or Love. That is, this number will be the numerator in any calculations of percentages (i.e., the bit that goes above the line in a fraction).
- The base used in calculating percentages consists of everybody except those people that have Missing data. Thus, this particular example will compute Top 2 Box percentage scores (i.e., the proportion of people that said Like or Love from amongst all those people that selected one of the five categories).
Excluding categories when computing percentages
Pick One and Pick One - Multi questions
Right-click on the category you wish to exclude and select Remove. This causes the table to be recomputed with this category removed. You can see which categories have been removed by right-clicking on a category, selecting Values, and looking at or editing the selections in the Missing Data column. Alternatively, to remove a category from the table without affecting the calculations on a table, you can right-click the category and select Hide. This removes the category label from the table but does not change any of the Missing Data selections. To undo the removal of categories from a table you can right-click the table and select Revert, or select Values and make changes to the selections in the Missing Data column.
Excluding categories from averages
Number, Number - Multi, and Number - Grid questions, and Statistics - Right and Statistics - Below
In some cases, you may wish to keep a category showing in the table but remove its contribution to the Average, Sum, or other numerical statistics that are displayed in the Statistics - Right or Statistics - Below. For example, a rating scale question may include a Don't Know category, and you want to know the number of respondents who have selected this category without those respondents contributing to the calculation of the average score for the question. To achieve this, right-click on your table and select Values, and enter a value of NaN in the Value column for that category. NaN stands for Not a Number, and this value will not be used in the calculation of the average.
The same method is used to change the way the Average, Sum, or other numeric statistics are calculated for tables showing Number, Number - Multi, and Number - Grid questions.
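The behaviour described above — a NaN-valued category dropping out of the average while still counting toward percentages, and a Top 2 Box count acting as the numerator — can be illustrated outside of Q with a small pandas sketch. The category names, numeric codes, and sample responses below are invented for illustration and are not taken from any particular data file:

```python
import numpy as np
import pandas as pd

# Hypothetical 5-point rating scale plus a "Don't know" category.
# As described above, "Don't know" stays in the table (it still earns a
# percentage) but is mapped to NaN so it does not feed into the average.
values = {"Hate": 1, "Dislike": 2, "Neither": 3, "Like": 4, "Love": 5,
          "Don't know": np.nan}

responses = pd.Series(["Like", "Love", "Hate", "Don't know", "Like", "Dislike"])

# Percentages include every category, "Don't know" among them.
percentages = responses.value_counts(normalize=True) * 100

# The average uses the numeric codes; pandas skips NaN automatically,
# so "Don't know" contributes nothing: mean of [4, 5, 1, 4, 2] = 3.2.
average = responses.map(values).mean()

# Top 2 Box: share of respondents choosing Like or Love, mirroring the
# Count This Value column for Pick Any questions (3 of 6 -> 50%).
top2 = responses.isin(["Like", "Love"]).mean() * 100
```

Note how the base differs between the two statistics: the average divides by the five numeric responses only, while the percentages divide by all six, which is exactly the Value-versus-Missing-Data distinction the article describes.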
German homeland dialects are divided into three major groups: Low German, Middle German and Upper German. German-speaking immigrants to Kansas speak dialects from all three regions. These major regions are subdivided into their most prominent sub-regions above. These sub-regions are subdivided into even smaller important regional dialects (see below) on many printed dialect maps where more detail can be displayed. All these regions differentiate themselves in their pronunciation of certain consonants and vowels and sometimes in the loss of or differences in certain word endings. The lines that separate the dialect areas are called isoglosses. Although isoglosses are displayed as lines, they are actually transition areas where pronunciation gradually changes. Sometimes an isogloss conforms to a natural boundary, such as a river or mountain range. Sometimes it conforms to a current or former national border. Sometimes there is no obvious reason for a pronunciation isogloss to fall where it does. Rural regions can have their own dialects, as can cities. Dialects are also often identified by differences in vocabulary that may or may not cross over pronunciation isoglosses. The isoglosses displayed above indicate the following pronunciation differences: Where Low German has ik "I", Middle German has ich. Where Low German has maken "to make", Middle German has machen. West Low German speakers say mähet "they mow", while East Low German speakers say mähen. Where (Low and) Middle German has appel "apple", Upper German has apfel. Where (Low and) Middle German has pund "pound", Upper German has pfund. West Middle German pund is pronounced fund in the East Middle German region. West Upper German and North Upper German pronounce the 2nd person plural object pronoun "you (all)" as euch while the East Upper German speakers say enk. West Upper German speakers say mähet "they mow", while North Upper German speakers say mähe. 
Regional dialects found in the prominent dialect sub-regions:
West Low German: Eastphalian, Westphalian, North Low Saxon, East Frisian
East Low German: Mecklenburgish, Vorpommersch, Brandenburgish, East Pomeranian, Low Prussian, Plautdietsch
West Middle German: Ripuarian, Mosel-Franconian, Rhein-Franconian, Hessian
East Middle German: Thuringian, Upper Saxon, Silesian, High Prussian
West Upper German: Alemannic, Swabian, Alsatian
North Upper German: East Franconian, South Franconian
East Upper German: Bavarian-Austrian, North Bavarian, South Bavarian
Last updated August 26, 2009
The term 'sustainable development' was popularised by the World Commission on Environment and Development (WCED) in its 1987 report entitled Our Common Future. This book is also known as the Brundtland Report, after the Chair of the Commission and former Prime Minister of Norway, Gro Harlem Brundtland. The aim of the World Commission was to find practical ways of addressing the environmental and developmental problems of the world. In particular, it had three general objectives. Photo: © Yann Arthus-Bertrand/Earth from above/UNESCO Our Common Future was written after three years of public hearings and over five hundred written submissions. Commissioners from twenty-one countries analysed this material, with the final report being submitted to the United Nations General Assembly in 1987. Our Common Future reported on many global realities and recommended urgent action on eight key issues to ensure that development was sustainable, i.e. that it would satisfy 'the needs of the present without compromising the ability of future generations to meet their own needs'. These eight issues included:
- Population and Human Resources
- Food Security
- Species and Ecosystems
- The Urban Challenge
- Managing the Commons
- Conflict and Environmental Degradation
These issues - and many others like them - were discussed at a major international conference in Rio de Janeiro, Brazil, in June 1992. Known as the United Nations Conference on Environment and Development - or more simply as the Earth Summit - this meeting brought together nearly 150 Heads of State where they negotiated and agreed to a global action plan for sustainable development which they called Agenda 21. The Earth Summit was also attended by nearly 50,000 official observers and citizens from around the world who met in a wide range of official and community-based councils and seminars at a Global Forum.
As well as Agenda 21, four new international treaties - on climate change, biological diversity, desertification and high-seas fishing - were signed in the official sessions. In addition, a United Nations Commission on Sustainable Development was established to monitor the implementation of these agreements and to act as a forum for the ongoing negotiation of international policies on environment and development. Agenda 21 has been the basis for action by many national and local governments. For example, over 150 countries have set up national advisory councils to promote dialogue between government, environmentalists, the private sector and the general community. Many have also established programmes for monitoring national progress on sustainable development indicators. At the local government level, nearly 2000 towns and cities worldwide have created their own Local Agenda 21 plans.
When the Manchu ruled China during the Qing Dynasty, certain social strata emerged. Among them were the Banners, mostly Manchu, who as a group were called Banner People. Manchu women typically wore a one-piece dress that came to be known as the cheongsam. The qipao fitted loosely and hung straight down the body. Under the dynastic laws after 1644, all Han Chinese were forced to wear a queue and dress in Manchurian qipao instead of traditional Han Chinese clothing, under penalty of death. In the following 300 years, the qipao became the adopted clothing of the Chinese (though it cannot be considered as the traditional dress of Chinese, as it was forced upon them), and was eventually tailored to suit the preferences of the population. Such was its popularity that the garment form survived the political turmoil of the 1911 Xinhai Revolution that toppled the Qing Dynasty. Silk is a natural protein fiber, some forms of which can be woven into textiles. The best-known type of silk is obtained from cocoons made by the larvae of the mulberry silkworm Bombyx mori reared in captivity (sericulture). The shimmering appearance for which silk is prized comes from the fibers' triangular prism-like structure which allows silk cloth to refract incoming light at different angles. "Wild silks" are produced by caterpillars other than the mulberry silkworm and can be artificially cultivated. A variety of wild silks have been known and used in China, South Asia, and Europe since early times, but the scale of production was always far smaller than that of cultivated silks. They differ from the domesticated varieties in color and texture, and cocoons gathered in the wild usually have been damaged by the emerging moth before the cocoons are gathered, so the silk thread that makes up the cocoon has been torn into shorter lengths.
Commercially reared silkworm pupae are killed by dipping them in boiling water before the adult moths emerge, or by piercing them with a needle, allowing the whole cocoon to be unraveled as one continuous thread. This permits a much stronger cloth to be woven from the silk. Wild silks also tend to be more difficult to dye than silk from the cultivated silkworm.
The writer of the Hebrew letter defines faith thusly: "Now faith is the substance of things hoped for, the evidence of things not seen". (Hebrews 11:1 KJV) And that also describes hope. There are a number of definitions given by various dictionaries but one of the most accurate and meaningful seems to be this: 'A desire with confident expectations'. The Holy Bible tells us the Christian's reason for living the Christian life. There is a wonderful reward which awaits the faithful Christian. A wonderful life forever with none of the negative aspects found in the human's physical life. Those of us who do not deny that there is an almighty God whose Son came to this earth and died that we might have access to such a life hope for that reward. We desire it and have confident expectations of obtaining it. It is more than wishful thinking. We have hope of it because we have the promise of that almighty God if we obey Him. And there is ample evidence of His existence and His authentic revelation in the Holy Bible. That is the evidence mentioned in the quote from the Hebrew letter above. Perhaps the clearest and most concise verse containing His promise is found in the book of Revelation: "Fear none of those things which thou shalt suffer: behold, the devil shall cast some of you into prison, that ye may be tried; and ye shall have tribulation ten days: be thou faithful unto death, and I will give thee a crown of life." (Revelation 2:10 KJV) And you see, Christians are the only ones who have this hope. It is not available to those who do not believe in Jehovah God, His Son, or His revelation to man. And to say that you believe in those is inadequate. You have to live a life dedicated to His service. Notice that promise above and consider the condition which precedes the reward: "Be thou faithful unto death...". And that does not simply mean attend services of His church until your death arrives.
It means to live an obedient life before Him, even if it means also giving your life. Many of those early Christians did exactly that: they gave their lives rather than denounce Jesus Christ, the Son. A notable reformation-era gospel preacher, Alexander Campbell, in a public debate with an atheist made this remark which plainly describes it. He said, "Sir, you are like that ox over there, you have no hope". This writer has had numerous discussions with atheists, and I have generally asked them, "Why do you desire to have me disclaim my belief in God? What benefit have you for me to exchange for the loss of my hope of eternal life?" Their only response is that they feel one should know the truth. There is no value to their beliefs, and they have no proofs of them either. For one to relinquish their faith in God not only causes them to lose their eternal reward, but brings upon them again the fear of death. The atheist does have a reward for his thinking, and that is that he does not have to humble himself before the God of heaven in whom he does not believe. But unfortunately, that is a very temporary reward, and once his life is over he will understand that a brief reward is far from comparable to that of eternal life in glory and peace. The Bible says several times, "They have their reward." "Therefore when thou doest thine alms, do not sound a trumpet before thee, as the hypocrites do in the synagogues and in the streets, that they may have glory of men. Verily I say unto you, They have their reward." (Matthew 6:2 KJV) "And when thou prayest, thou shalt not be as the hypocrites are: for they love to pray standing in the synagogues and in the corners of the streets, that they may be seen of men. Verily I say unto you, They have their reward." (Matthew 6:5 KJV) "Moreover when ye fast, be not, as the hypocrites, of a sad countenance: for they disfigure their faces, that they may appear unto men to fast. Verily I say unto you, They have their reward."
(Matthew 6:16 KJV) The apostle Paul told the Christians in Colosse that their hope was laid up for them in heaven, by which he means that their reward is reserved for them: life eternal with God the Father and His Son. (Colossians 1:5) If you are living in the light of God's word, continue to do so and be thankful for the hope that is in you, and never let anyone put doubt into your mind concerning it. You have the Biblical proofs of creation, life, death and eternity, while those who will try to shake your faith have no hope, no proofs of any of the facts just stated, only a rebellious attitude towards God.
The New England historians' achievement entitles them to a far more important place in the American Renaissance than they have usually been given. Their fifty volumes on Spanish, Dutch, Franco- and Anglo-American history are not a curious by-product but a central expression of romantic thought in America. They found in romantic conventions a way of giving the Past artistic order and contemporary moral significance. Confronted with real historical characters, they did what any historian who wants to portray individual character is likely to do: they turned to the vocabulary of contemporary literature. That vocabulary allowed them to adapt romantic conceptions of the hero to the values of a liberal, republican society; to concentrate on representative types who had drawn much of their strength from their "natural" relationship with the People. In this terminology the historians also found the means to express the moral drama that the Unitarian, along with most liberal Americans, saw in history. They did not ignore the influence of "forces"--even of economic forces--on history, nor did they ignore the importance of struggle. Almost every one of their works dramatizes that conflict between "artificial" and "natural" principles which they all regarded as the inevitable condition of progress. Even in Motley and Bancroft this conflict did not mean a decisive battle between absolute good and absolute evil. The historians achieved their greatest success when they dramatized the struggle by portraying a wide range of types--from the "savage" to the sensual reactionary--whose differences might be emphasized by the very organization of the history. When the romantic placed Indian, priest, and Catholic king against the progressive hero, he not only reinforced mid-nineteenth-century conceptions of destiny; he clarified the meaning of the "natural," and he prepared for a more convincing resolution of the conflict than historical romancers like Cooper had been able to achieve. 
For although the historical hero's victory, like that of Cooper's Edward Effingham, represented a compromise between the extremes, the historical hero was real. Although their reliance on contrasting types restricted the historians to limited kinds of subjects, it gave their best histories an order and a significance that more recent, "scientific" monographs too often lack. Despite their inconsistencies and their peculiar terminology, their version of moral and historical truth often has an enduring value. The economic, religious, and political errors of Mexico, of Spain in the Netherlands and at home, of France in Canada and England in the colonies, seem as clear to modern historians as they seemed in nineteenth-century America; the romantic historians made the errors memorable by dramatizing them in the context of moral vigor and torpor. The conventional methods also limited the kinds of traits that might be delineated in the histories. But although they could not suffice to explain individual psychology, they gave Motley's villains, Prescott's Montezuma and Cortés, and Parkman's La Salle and Montcalm a deeper reality than these characters might otherwise have had. To individual portraits, scenes, and incidents, moreover, they gave a human truth that transcends all the obvious inadequacies. The pictures of La Salle amid the wreckage of his last voyage, of Montezuma sitting in chains, of the Prior of Saint Vaast bribing the Malcontents--these are unforgettable not in spite of the conventional ideas, but because of them. Clearly, then, the New England histories suggest the impossibility of divorcing literary methods from historical theory. Although one might like to know just how deliberately the historians imposed the romantic formulas on the historical record, the question seems unanswerable. 
The assumptions on which the formulas were based had already pervaded the historians' conception of the Past before they began writing; the assumptions, indeed, had affected each man's decision to write history in the first place, and they had also helped to attract each of them to his particular subject. One cannot separate the New England case against Rome from the literary types in which the case was embodied. It is precisely because of this relationship that the New England historians belong not on the periphery of the American Renaissance, but at the center. Their histories provide a foundation in documented fact for the tension between form and essence, head and heart, civilization and Nature, that preoccupied so many of their contemporaries. Their three greatest works dramatize that conflict by exploiting the most effective conventions of the period without belying its highest standards of historical research.
When the Italian composer Giacomo Puccini presented his great opera “Madame Butterfly” for the first time at La Scala in Milan at the beginning of 1904, he was forty-six years old, midway between his first great works and the masterpieces of the final years of his life. To a large extent, Puccini was not interested in any intellectual interpretations that critics and scholars might attribute to his “Orientalist” work; it would scarcely have occurred to him that his great operatic art carried ideological dimensions. His previous works, “Manon Lescaut”, “La Bohème”, and “Tosca”, had been works full of the beauty of music, the splendour of song, and the faithfulness of costume and set, nothing more and nothing less. As for the interpretations, they came later, perhaps only after the great musician's death in 1924, crowned with an overwhelming glory that was his not only in Italy but also in France and, of course, the whole of Europe. One hallmark of that glory was that it was always shared among more than half a dozen great operas which are still performed, loudly and successfully, to this day. Yet it was “Madame Butterfly” that received the particular attention in which aesthetic interest combined with intellectual interest, throughout the twentieth century, especially once the East revolted against what it faced while the West confronted its own Orientalism through a complex of guilt. 
Between spirituality and materialism

Surely “Madame Butterfly” in itself permits a deep plunge into this complex of guilt, for its tragedy can be read in terms of the spirituality of the East and the “materialism” of the West, in accordance with Kipling's famous line, “East is East and West is West, and never the twain shall meet.” There is no doubt that “Madame Butterfly” seems at least in part an application of that saying, even if it strikes us today as pitifully funny. In any case, it seemed tearful and catastrophic at the time, when Puccini (1858-1924) was witnessing the madness of a world just entering modernity, and with it every kind of vulgarity, sarcasm, and “happy nightmare”. We can certainly understand, then, the failure of the first performances of “Madame Butterfly”, and it was in fact a staggering failure; Puccini would later say that he had never known anything like it in his life, and never would again in the years that remained to him. But it was a transient failure. The opera soon received a good measure of revision, changing from an opera in two long acts into a work of three acts of medium length, without any change to its music or its songs, which before long were on every lip and tongue, quoted abundantly in a variety of forms, films and so on. Let us say from the outset that what helped in all this subsequent spread was precisely the historical and political circumstances that propelled Japan itself forward. And let us not forget here that “Madame Butterfly” is, in the end, almost a Japanese opera, even though its “male hero” is an American. It is, in any case, a love story with only one loving party, the Japanese one, and it takes place in the Japan we know. 
That is why its atmosphere is Japanese, its sets and costumes are Japanese, not to mention that its heroine is Japanese: the beautiful Cio-Cio-San, nicknamed “Butterfly”, who gives the opera its title and is at once its heroine, its victim, and perhaps the representative of Oriental passion within it. In the end, the story revolves around a love that will prove fatal to this Japanese beauty, above all when, after a long wait for her American “lover” Pinkerton, she finds herself forced by the balance of power between the two sides to give up her child to its father, who in the last act will seem to know nothing of the tragedy of the Oriental woman who fell victim to his passing wanderings.

Victim of gloomy chatter

Here, so that things do not come to resemble a crossword puzzle, we must return to the story itself. The story, originally adapted from a short story by John Luther Long, revolves around a US Navy officer, Pinkerton, who is introduced by a marriage broker in Nagasaki, Japan, to a sweet girl, Cio-Cio-San, nicknamed Butterfly. In keeping with the customs that applied in such cases, Pinkerton treats his marriage to Butterfly as something between earnest and jest, never ceasing to regard the whole situation with amused condescension, especially since Butterfly gives him boundless tenderness and love. She lives with him for three years, after which he leaves, their relationship having produced a son who is born later, in his absence. In fact, Pinkerton had written a letter to Butterfly telling her everything, explaining the reality of the situation and that he never intended to return or to form a family, and thanking her for all the affection she had shown him. 
But the letter never reached Butterfly: its bearer, the official at the US Consulate, was so touched, when he met her to deliver it, by her love for her “husband” and her waiting for him that he could not bring himself to tell her the truth. So he left her, her heart still set on Pinkerton's return, on the reunion of that little family, and on all the happiness she had promised herself.

The return of the beloved

The truth is that Pinkerton does return aboard his ship to Nagasaki, but he returns with his American wife, to show her the country in which he had lived years of amusement and entertainment, and the woman to whom he had been “married”. For Pinkerton, the story is still a joke to be told. But the joke stops when he discovers that he has a son by Butterfly. His wife becomes attached to the child, and so Pinkerton, always between jest and earnest, and as a practical man who can never grasp the meaning of love or the emotions of a woman who, to his great surprise, considers herself truly his wife, asks Butterfly to hand the child over to the care of his wife, who will take him to America and raise him as if he were her own son, the father being more entitled to him under the law and under the prevailing relations of power. The story ends with the devastating despair that strikes Butterfly as she passes the child to his new “mother”, only to discover that she has spent her whole life deceiving herself. And so the last act closes with Butterfly killing herself, having lost everything, even hope. 
Musical skill and two difficult deaths

In the end, Puccini made his music for this tragic work a showcase of his musical skill, even if some of his use of Japanese rhythms, in the moments when the heroine voices her worries and her long, steadily escalating wait, came to seem tiresome and muddled. Even so, this did not prevent “Madame Butterfly” from eventually becoming one of the most important operatic works in Puccini's career, nor did it prevent Cio-Cio-San (Butterfly) from joining Mimì, the central character of his opera “La Bohème” (1896), among the most humane and passionate female opera characters we see end in death: the one a victim of the fates that killed her, the other a victim of a love that proved a delusion in a time when love had become no more than a memory amid eating and drinking.
Hubble Sees Jupiter's Red Spot Shrink to Smallest Size Ever | May 15, 2014 | Bob King | Posted on 05/15/2014 1:08:00 PM PDT by BenLurkin "Recent Hubble Space Telescope observations confirm that the spot is now just under 10,250 miles (16,500 km) across, the smallest diameter we've ever measured," said Amy Simon of NASA's Goddard Space Flight Center in Maryland, USA. Using historic sketches and photos from the late 1800s, astronomers determined the spot's diameter then at 25,475 miles (41,000 km) across. Even the smallest telescope would have shown it as a huge red hot dog. Amateur observations starting in 2012 revealed a noticeable increase in the spot's shrinkage rate. The spot's waistline is getting smaller by just under 620 miles (1,000 km) per year while its north-south extent has changed little. In a word, the spot has downsized and become more circular in shape. Many who've attempted to see Jupiter's signature feature have been frustrated in recent years not only because the spot's pale color makes it hard to see against adjacent cloud features, but because it's physically getting smaller. (Excerpt) Read more at universetoday.com ... KEYWORDS: jupiter; redspot posted on 05/15/2014 1:08:00 PM PDT Global warming is to blame. Or Bush. Man made global warming no doubt... posted on 05/15/2014 1:09:26 PM PDT by Common Sense 101 (Hey libs... If your theories fly in the face of reality, it's not reality that's wrong.) NASA's fault. A record number of satellites sent to Jupiter, flybys by Pioneer and Voyager stripping the planet of some of its energy. Yep - man-caused, no doubt about it. posted on 05/15/2014 1:11:29 PM PDT posted on 05/15/2014 1:12:22 PM PDT (Liberty or Big Government - you can't have both.) It's that evil CO2... just ask Algore posted on 05/15/2014 1:13:27 PM PDT (America needs more real Americans.) To: Common Sense 101 posted on 05/15/2014 1:13:35 PM PDT (The media must be defeated any way it can be done.) 
posted on 05/15/2014 1:13:38 PM PDT It’s all those SUVs on Jupiter posted on 05/15/2014 1:14:29 PM PDT Are you implying that the Great Red Spot (GRS) was in the pool??? posted on 05/15/2014 1:15:38 PM PDT (The Tree of Liberty Thirsts) posted on 05/15/2014 1:18:48 PM PDT Smallest EVER? How would we know? Were humans around 100 million...1 billion years ago? posted on 05/15/2014 1:22:20 PM PDT It’s those Koch Brothers. Man has to be causing climate change on Jupiter by observing it with a telescope. That makes as much sense as what the alarmists are telling us here on Earth. posted on 05/15/2014 1:27:06 PM PDT (French-like Democrats wave the white flag of surrender while we are winning) posted on 05/15/2014 1:27:35 PM PDT ("Just look at the flowers, Lizzie. Just look at the flowers.") Hubble Sees Jupiter's Red Spot Shrink to Smallest Size Ever Scientists are attempting to determine if there's any connection between the shrinking of the red spot, and the discovery of a gigantic tube of Clearsil found orbiting the planet. “IT’S SHRINKING!!! IT’S SHRINKING!!!!” To: Lee'sGhost; dfwgator posted on 05/15/2014 1:28:56 PM PDT (The difference between a Humanist and a Satanist - the latter admits whom he's working for) Jupiter's Red Spot Shrink to Smallest Size Ever Had the same observation been observed on Uranus, I would have breathed a sigh of relief.......... posted on 05/15/2014 1:33:00 PM PDT by Hot Tabasco (Under Reagan spring always arrived on time.....) BLM just ordered another thousand sub-machine guns to protect the spotted jumping tortoise from the shrinking red spot. posted on 05/15/2014 1:35:13 PM PDT by Political Junkie Too (If you are the Posterity of We the People, then you are a Natural Born Citizen.) posted on 05/15/2014 1:38:19 PM PDT (Remember the River Raisin.) I wonder if the Jupiterians are running around all panicked and issuing white papers and TeeVee talk shows about “Spot Shrinkage: The End Of Life On Jupiter!” Shouldn’t that be “...ever observed”? 
posted on 05/15/2014 1:54:24 PM PDT by hal ogen (First Amendment or Reeducation Camp?) Even the smallest telescope would have shown it as a huge red hot dog Weiner shrinkage is apparently a Universal problem .... posted on 05/15/2014 2:11:46 PM PDT FIRST THE GLACIERS, NOW JUPITER!!!! posted on 05/15/2014 2:13:08 PM PDT (Early 2009 to 7/21/2013 - RIP my little girl Cathy. You were the best cat ever. You will be missed.) It’s been consistently shrinking since it was discovered, so isn't it, anytime anyone looks at the GRS, the “Smallest Size Ever Seen”? posted on 05/15/2014 2:14:24 PM PDT (There's been a huge party. All plates and the bottles are empty, all that's left is the bill to pay) CO2 must be causing this! posted on 05/15/2014 2:14:56 PM PDT (Vote for Conservatives not for Republicans!) Ancient astronaut theorists believe that the Great Red Spot is a portal to an alternate universe and may be where ancient aliens entered our solar system. If so, the shrinking of this object may signal a return of these aliens and an uncertain future for mankind on this planet. (just made that up, don’t panic) posted on 05/15/2014 2:24:14 PM PDT by Peter ODonnell (It wasn't this cold before global warming) To: Peter ODonnell Alternatively, Jupiter's Herpes is going into remission. posted on 05/15/2014 2:28:06 PM PDT ("Don't compare me to the almighty, compare me to the alternative." -Obama, 09-24-11) posted on 05/15/2014 2:30:13 PM PDT All these worlds are yours, except Europa... attempt no landings there.... posted on 05/15/2014 2:32:13 PM PDT (It was never Bush's fault... Spock's messing with red matter was what screwed us all up!) Ever? Really? In all of Jupiter's existence it's never been smaller? Or is it the smallest we've observed? The editors who write headlines need to learn English. posted on 05/15/2014 2:56:10 PM PDT by Vermont Lt (If you want to keep your dignity, you can keep it. Period........ Just kidding, you can't keep it.) 
You may be on to something, I think that damn two by four from 2001 and 2010 is somehow behind this. posted on 05/15/2014 3:14:10 PM PDT by The Antiyuppie ("When small men cast long shadows, then it is very late in the day.") Somewhere, someone is worrying about this. Somewhere, someone is worrying about this. LOL, oh yeah! posted on 05/15/2014 3:25:12 PM PDT (One who commands, must obey.) To: Vermont Lt My thoughts as well. Local weather guy said temps yesterday at one beach community were “the highest ever”. posted on 05/15/2014 3:26:49 PM PDT (This is not a statement of fact. It is either opinion or satire; or both.) it must be man made climate change posted on 05/15/2014 5:09:51 PM PDT To: BenLurkin; brytlea; cripplecreek; decimon; bigheadfred; KoRn; Grammy; steelyourfaith; Mmogamer; ... Thanks BenLurkin, an APoD extra. posted on 05/18/2014 3:08:51 PM PDT Hurricanes eventually wind down on earth, so.................... posted on 05/18/2014 3:13:41 PM PDT by The Cajun (tea party!!!, Sarah Palin, Mark Levin, Ted Cruz, Mike Lee, Louie Gohmert......Nuff said.)
The LNER Electric Bo-Bo Class EM1 (BR Class 76) Locomotives The scheme to electrify the Manchester Sheffield and Wath line using overhead 1500V DC was announced in November 1936. The 1500V system had been recommended by the Ministry of Transport in 1927, and was already in use on the Manchester, South Junction & Altrincham line (MSJ&A). The initial proposal included 69 mixed traffic (Class EM1), 9 express passenger (Class EM2), and 10 banking (Class EB1) locomotives. The mixed traffic EM1 design was started first, and tenders were invited at the end of 1937. In January 1939, electrical equipment for 70 locomotives was ordered from Metropolitan Vickers Electrical Co. with final assembly to take place at Doncaster Works. The electrification scheme was halted due to World War 2, and the initial order was reduced to one prototype in November 1939. Contemporary accounts note that the EM1 had an unusual design. The two 4-wheel bogies were linked by a coupling. The drawgear and buffers were mounted on the outer ends of the bogies, resulting in no transmission of traction or braking forces through the locomotive body. A similar system with the drawbar through the bogies had already been supplied by Metropolitan Vickers to South African Railways. Gresley was interested in the mechanical design of the new locomotives, and visited both South African Railway and Metropolitan Vickers. Gresley was reportedly very impressed with the system after these visits, and adopted it for the new locomotives. Each axle was driven by a 467hp motor through 17:70 reduction gear. The two motors on a bogie were connected in series, so that they received 750V DC. The two bogies could then be controlled in series or parallel. An EM1 would be started with the bogies wired in series. Acceleration was achieved by successively switching out the fifteen starting resistances. Further speed could be achieved by switching from series to parallel, and returning the resistances back into the circuit. 
In addition to the fifteen resistances, an extra control could be applied by diverting the traction motors' field current through another resistance. This allowed for a total of 19 'notch' settings and two 'gears' (series/parallel). The choice of a Bo+Bo (4 wheel axle) wheel arrangement was primarily due to the LNER's concerns regarding the capital cost of the project. This quickly proved to be a serious mistake due to excessive weight transfer when accelerating. When starting from a stop, it was possible for the reduction in the load on the leading axle to result in wheel slip and a loss of adhesion. In an attempt to compensate for the loss of adhesion on these axles, a 'weight transfer switch' could reduce the field current in the leading motors on each bogie, but this proved ineffective in operation. As well as reducing the weight transfer problem, a heavier Co+Co locomotive would have also helped with braking on the long descents found on the Woodhead route. Prototype No. 6701 was completed in August 1940. Initial trials involved towing No. 6701 with a steam locomotive on the East Coast main line between Doncaster and Retford. These trials found an extremely uncomfortable natural period of oscillation at around 20mph. Various alternative spring arrangements were tried until better riding conditions were obtained. No. 6701 was then officially added to stock in September 1941 and worked electrical trials on the MSJ&A. These trials typically included loaded wagons or empty coaches. For regenerative brake tests on the level, a gradient was simulated by attaching two J39 0-6-0s which worked full-out in the opposite direction. The MSJ&A trials tested the electrical equipment but found further ride problems at about 20-25mph. No. 6701 returned to Doncaster on 14th October 1941, and entered storage until after the war. No. 6701 came out of storage in 1947, and was renumbered as No. 6000. 
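The series/parallel notching described above can be sketched in a few lines of Python. This is an illustrative model only: the 1500V supply, the fifteen starting resistances, the two gears, and the 750V-per-motor figure come from the text, but the per-notch resistance (`STEP_OHMS`) and the 660A current are stand-in values chosen purely for illustration.

```python
# Illustrative model of the EM1 notch scheme (not a real circuit model).

SUPPLY_V = 1500.0   # overhead line voltage (DC)
N_STEPS = 15        # starting resistances, switched out one per notch
STEP_OHMS = 0.1     # hypothetical resistance of each step (invented)

def circuit_resistance(notch):
    """Starting resistance still in circuit at notch 0..15."""
    return (N_STEPS - notch) * STEP_OHMS

def motor_volts(gear, amps, notch):
    """Approximate voltage across each individual traction motor.

    'series'  : both bogies in series across the supply (starting gear)
    'parallel': each bogie directly across the supply (running gear)
    Each bogie carries two motors in series, so they share its voltage.
    """
    line = SUPPLY_V - amps * circuit_resistance(notch)
    pair = line / 2 if gear == "series" else line
    return pair / 2

print(motor_volts("parallel", 0, 15))  # 750.0 - matches "received 750V DC"
print(motor_volts("series", 0, 15))    # 375.0 - half that in starting gear
print(motor_volts("series", 660, 0))   # ~127.5 - further reduced by the resistances
```

Stepping `notch` from 0 to 15 in series gear and then repeating the sweep in parallel gear reproduces the acceleration sequence the article describes; the field-weakening notches would sit on top of this as a further refinement.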
Its suspension gear was altered, maintained, and thoroughly cleaned, resulting in improved riding during further trials on the East Coast main line. No. 6000 was then shipped to the Netherlands State Railway in September 1947, and was running by the 15th. By November 1947, No. 6000 had clocked up 10,000 miles but the ride quality was already beginning to deteriorate again. Various alterations were tried, and a satisfactory solution was only reached in March 1948 when drastic changes were made to the bogies, upper-structure springing, and the bogie coupling. These changes were very successful, and were still in place when No. 6000 returned to Britain in February 1952. On its return, No. 6000 was officially named Tommy - an affectionate name that the Dutch had used for it, in reference to their recent experiences with British liberation forces. Authority was given to build the production EM1 locomotives in July 1946, but the order was not placed until after Nationalisation. Twenty four were ordered from Darlington in January 1948, and fifty seven were ordered from Gorton in July 1948. The Darlington order was later cancelled, but the Gorton engines entered service between October 1950 and August 1953. Mechanical construction and final completion were performed at Gorton, but the electrical equipment and traction motors were fitted at Dukinfield. The most obvious external changes were the cab door and side window positions. The production locomotives used the modified spring and bogie arrangement adopted in the Netherlands. The last ten EM1s were also fitted with Timken roller bearing axleboxes, in place of plain white metal bearings. Nine of the production EM1s were fitted with Bastian & Allen steam generators to heat passenger trains. Five more were fitted with steam generators in 1955. Each boiler had three heating circuits that were automatically switched off at 61, 63, and 65 psi. Each circuit consisted of sixty quartz-covered 2KW heating elements. 
This gave a total rating of 360 KW and a steaming rate of 1,000lb/hr. The boilers were taken out of use over time. Unused boilers tended to be left in situ due to the need to replace them with a balance weight. The boilers were also known to occasionally work loose and become a hazard, so some were eventually removed. A number of EM1s were scrapped with their boilers intact. Before June 1951, running-in trials were performed on the newly-opened 1500V Liverpool Street to Shenfield line. The overhead wires were energised in the Wath area in June 1951, and both trials and crew training moved to the Wath shed. Production EM1 locomotives entered full service in February 1952, operating the Wath to Dunford Bridge stretch. Trains were operated by two locomotives - one hauling, and one banking. Weight transfer on the Bo+Bo wheel arrangement led to severe slippage on the leading axles, and train loads had to be reduced from 850 tons to 750 tons. The weight transfer problem also led to some of the bogie centre pins being bent or fracturing. A series of trials between November 1952 and January 1953 attempted to find a cure for these problems. The trials used the London Midland Region and Eastern Region dynamometer cars, and attempted to quantify the forces involved and attainable factors of adhesion. A maximum load of 13.6 tons was measured on the bogie pin. Laboratory tests found that this load resulted in a stress of 10.3 tons / sq.in. on the hollow steel bogie pins then being used. A forged steel pin would reduce the stress to 4.5 tons / sq.in, and these were quickly adopted as standard on the EM1s. Weight transfer could also be reduced by slowly switching out the starting resistances. Normally, the starting resistances were switched out of circuit quickly so that they did not overheat. Slipping was found to be much less likely if the starting current was not allowed to exceed 660 amps. 
Although this could result in the starting resistances getting quite hot, no damage was detectable. It was assumed that the original design instructions included a large safety margin. The new Woodhead tunnel opened shortly afterwards on 3rd June 1954. Further trials in early 1955 confirmed that the EM1s met their contract specifications, and that they were ideal for freight operations. Cab riding was poor over 50mph but none of the test freight trains exceeded 43mph. The EM1s were fitted with two pantographs. The original plan was to only use the rear pantograph, and keep the front one in case of damage. The Woodhead route was plagued by serious ice problems, soot from steam engines, and unusual atmospheric conditions; all of which contributed to poor and often intermittent pickup from the overhead line. This experience quickly led to the use of both pantographs simultaneously. This greatly improved the pickup and is thought to have also significantly lengthened the life of the overhead wire. No. 6701 was initially known as "0-4+4-0 Mixed Traffic" when it was built. The designation of Class EM1 was introduced in September 1945. British Railways gave the EM1 engines the classification of Class 76 in 1968. In 1974, suffix letters were added to identify four variants. "76-aV" referred to the basic EM1 design with no heating boiler but with a vacuum brake. "76-bX" referred to EM1s fitted with dual brakes. "76-cV" referred to the basic design with a heating boiler and vacuum brake. "76-dA" referred to locomotives with the air brake modification. The EM1s primarily hauled coal over the Woodhead route. All of the EM1s were officially allocated to Reddish depot, but they were usually distributed to depots along the route as required. The EM1 locomotives proved adequate for passenger services between Sheffield and Manchester, and they took charge of this traffic after the EM2s were withdrawn in 1968. 
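The bogie-pin stress figures from the 1952-3 dynamometer trials can be sanity-checked with a little arithmetic. The 13.6 ton load and the 10.3 and 4.5 tons/sq.in stresses are from the text; the cross-sectional areas below are derived from them (stress = load / area), not quoted figures.

```python
# Back-of-envelope check on the bogie centre-pin stresses quoted above.
LOAD_TONS = 13.6  # maximum load measured on the bogie pin

def implied_area(stress_tons_per_sqin):
    """Cross-sectional area (sq.in) implied by stress = load / area."""
    return LOAD_TONS / stress_tons_per_sqin

hollow = implied_area(10.3)  # hollow pin: ~1.32 sq.in
forged = implied_area(4.5)   # forged pin: ~3.02 sq.in

# The forged pin spreads the same load over roughly 2.3x the area,
# which is why it cuts the stress by more than half.
print(round(forged / hollow, 1))  # 2.3
```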
The official end of passenger services in January 1970 led to the first two withdrawals in March 1970. These were Nos. 26035 & 26042 which sustained damage to their high voltage compartments and had been in storage since 1968. In reality, a one-way EM1-hauled passenger service from Manchester to Penistone continued for a couple of years after this date, in the form of a newspaper train. During the 1970s, the Woodhead route became a freight-only route and the EM1s (now Class 76s) hauled numerous merry-go-round coal services from the Yorkshire coal field to Fiddlers Ferry power station near Warrington. In 1966-70 and 1973-7, thirty Class 76s were modified to allow for multiple workings and air-braking on the heavy merry-go-round trains. As gradual withdrawals progressed during the 1970s, Timken-fitted bogies from withdrawn locomotives tended to be re-used on earlier locomotives during overhauls. By August 1979, thirty eight Class 76s remained in service. Declining coal traffic took its toll on the last surviving overhead 1500V DC line, and the Woodhead route's days were numbered. The Woodhead route eventually closed to all traffic on 18th July 1981, and all thirty four remaining Class 76 locomotives were withdrawn. Note: The length over buffers dimension was slightly larger with BR standard drawgear (50ft 6in) or articulated three-link couplings (50ft 7.5in). Weight and axle loadings are for the dual brake (76-bX) variant. All variants weighed between 86 tons 14 cwt and 87 tons 18 cwt with the exception of No. 6000 (89 tons 0 cwt).

Motors: 4x M.V. Type 186
Total Power (1hr rating): 1,868 hp
Tractive Effort (starting): 45,000 lb
Wheel diameter (Bogie): 4ft 2in
Length over buffers: 50ft 4in
Weight: 87 tons 14 cwt
Max. Axle Load: 22 tons 2 cwt

No. 26020 (later 76 020) is the only complete EM1 to have been preserved, and is a part of the National Collection. A cab section from No. 
26048 Hector) has been preserved in the Manchester Museum of Science and Industry. MSL Hobbies Ltd produce nickel silver etch kits of the EM1 (Class 76) for 2mm (N gauge), 4mm (OO gauge), and 7mm (O gauge) scales. The 2mm kit requires a modified KATO chassis. DC Kits and Q Kits have both produced 4mm (OO gauge) kits of the EM1, but current availability is unknown. A ready-to-run OO model of the Class 76 has been produced exclusively by Heljan for Olivia's Trains. The final twelve EM1 engines received Greek names. The prototype also received the name Tommy whilst on loan to the Netherlands, and retained the name when it returned. Tommy's nameplate was inscribed "So named by drivers of the Netherlands State Railways to whom this locomotive was loaned 1947-52". Tommy originally bore the LNER number 6701 and was renumbered in 1946 to No. 6000. All nameplates were removed from the EM1s in 1968-70.
|BR No.||1971 No.||1976 No.||Build Date||Withdrawal Date||Name|
Thanks to Mike Bennett for the "BR Blue Era" photograph of Class 76 No. 76022 at Reddish. Thank you to Mike Morant for the colour photograph of EM1 No. E26036 in BR blue.
Intelligibility is, for voice communications, the capability of being understood - the quality of language or thought that is comprehensible. Intelligibility relates to clarity, explicitness, lucidity, comprehensibility, perspicuity, legibility, plain speaking, manifestation, understandability, and precision. It is the degree to which speech can be understood. With specific reference to speech communication system specification and testing, intelligibility denotes the extent to which trained listeners can identify words or phrases that are spoken by trained talkers and transmitted to the listeners via the communication system.
Intelligibility and noise
For satisfactory communication the average speech level should exceed that of the noise by 6dB, but lower S/N ratios can be acceptable (Moore, 1997). Occupying a wide frequency range, speech is quite resistant to many types of frequency cut-offs and/or masking. Moore reports, for example, that a band of frequencies from 1000Hz to 2000Hz is sufficient (sentence articulation score of about 90%). Speech is also quite resistant to distortion due to overloaded parts of the transmission chain: even if no more than 1 or 2% of the wave's peak values are left unaffected by the distortion, scores of 80 to 90% for word articulation are still obtained (Moore, 1997).
Other factors influencing intelligibility
Excess reverberation also degrades speech intelligibility.
Note: Intelligibility does not imply the recognition of a particular voice.
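The 6dB signal-to-noise guideline mentioned above can be illustrated numerically. This is a minimal sketch (the function and variable names are my own, not from any source); it computes S/N in decibels from RMS amplitudes:

```python
import math

def snr_db(speech_rms: float, noise_rms: float) -> float:
    """Signal-to-noise ratio in decibels, computed from RMS amplitudes."""
    return 20 * math.log10(speech_rms / noise_rms)

# A 6 dB S/N corresponds to the speech RMS amplitude being roughly
# twice the noise RMS amplitude, since 20*log10(2) is about 6.02 dB.
ratio = snr_db(2.0, 1.0)
print(round(ratio, 2))
```

Equivalently, S/N is often expressed on power as 10·log10(Ps/Pn), which gives the same figure when the powers are the squares of the RMS amplitudes.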
|Quantity to be measured|Description|Good values|
|%ALcons|Articulation loss (popular in the USA)|< 10 %|
|C50|Clarity index (widespread in Germany)|> 3 dB|
|STI (RASTI)|Intelligibility (internationally known)|> 0.6|
- Intelligibility conversion ALcons to STI and vice versa
- Speech Quality and Evaluation (a chapter from a Master Thesis)
- Moore, C.J. (1997). An Introduction to the Psychology of Hearing (4th ed.). London: Academic Press.
|This page uses Creative Commons Licensed content from Wikipedia (view authors).|
Hi! This is Yolanda Vanveen on behalf of Expert Village. In this series, we're talking all about bamboo: how to grow it, how to display it, how to dig it up, what to do with it. In this segment, we're going to talk about what to do when your bamboo blooms. Bamboo does actually bloom, though sometimes only every 20-80 years. What happens when it blooms is it dies off, and you'll have no bamboo afterwards. It'll kill the whole plant. There have been studies done on this. You can take a seed off of a bamboo plant halfway around the world and plant it in your garden, and it'll come up and it'll grow beautifully. Then, 30-40 years later, it'll bloom and die. The same seeds that are grown halfway around the world will do the same thing at the same time. Genetically, it's in the seed. They will have a bloom time eventually. I've read lots of methods on how to save your plants when they do bloom, if they ever bloom, because it doesn't happen very often. The easiest trick that I've found is, as soon as you see blooms on your bamboo and they've turned into a seedpod, cut the seed pod off, cut it into four sections, throw them in a hole, and plant them. Many times that will stop the mother plant from dying, because it hasn't completely gone to seed and the seed's off the plant. Many times that seed will actually grow as well. Sometimes it works and sometimes it doesn't. Apparently, a lot of people have been able to save their bamboo after it blooms by using this method. Next, we'll talk about how to take care of your bamboo in the winter.
Arlene Spencer is just completing research for a historical non-fiction book she is writing about the early seventeenth century English merchantman master Richard Williams alias Cornish. She researches history and writes full time from her home in Seattle. For more information about the earliest English settlement fishing and trade stations on Damariscove Island, Maine or Monhegan Island, Maine; early English trade in America; Williams alias Cornish; or her book, please follow her on Twitter @pencilnubs. Arlene is also joining the staff here at GlobalMaritimeHistory, so please welcome her!
New Evidence: Was Thomas Weston, Seventeenth Century London Merchant, among the First to Sail Fish to Virginia’s Starving Colonists?
Thomas Weston, early seventeenth century London merchant, was by the end of 1623 among the first to ship fish into the Virginia colony. The records of three colonial Virginia court hearings, the first Virginia records we have involving either Weston or his ship, the Sparrow, will be considered. These three hearings came to my attention during research for a book I am currently writing about a contemporary of Thomas Weston’s, Richard Williams alias Cornish. Besides this article, only W.C. Ford, editor for the Massachusetts Historical Society from 1909 to 1929, has published anything about the Virginia colony’s court records in which Weston is first recorded. Indeed, a biographer wrote of Weston that after 1622, “His activities and movements thereafter remain obscure…”1 Ford reprinted only two of the three colonial court hearings involving Weston that are referenced in this article. Nothing else has been published about the historical record of Thomas Weston in Virginia.2 This article begins to remedy that. In the historical canon Weston is usually not associated with Virginia at all. In 1619, he was a failing businessman who was terrible with money, but he offered a small group of English Leyden Separatists, of all things, financing.
Afterward, he signed a contract with the Separatists promising them financial support and sailing vessels, including a small leaky ship called Mayflower. These Separatists were the Pilgrims who in 1620 sailed aboard the Mayflower and founded Plymouth colony, the first successful English colony in New England, and the second English colony in America to thrive after Virginia. Most of what we know about Weston is told to us in William Bradford’s firsthand account, ‘Of Plymouth Plantation’.3 4 One of America’s colonial founding fathers, Bradford was a Pilgrim, a Mayflower passenger, and Governor of the Plymouth colony from 1621 to 1657. Others of Weston’s contemporaries, such as Winslow (Good Newes From New England), Pratt (A Declaration of the Affairs of the English People That First Inhabited New England), and Morton (Mourt’s Relation) wrote about him, but none had the personal dealings with Weston that Bradford had. From these seminal works we learn about Weston only during the years 1619 through 1623. These are the years he was involved in the founding of Plymouth and an attempted second colony, Wessagusset. The history of Wessagusset is not recalled in this article, but it is fascinating; each of Weston’s contemporaries noted above wrote about it. Since Bradford, we had learned almost nothing new about Weston until two findings were published. The first was a series of legal records from Weston’s later years when he lived in Maryland; the second was Weston’s ancestry and a series of lawsuits in London related to his business dealings during his thirties, all of which were illegal. This second article gives us the first insight we have had into how Weston began his career.5 (Coldham 1974, 163-172). He started his career as a merchant in his early thirties in London, but ten years later became heavily involved in the business of establishing the first colony in New England, Plymouth (Bradford 1898, 127, 142).
Thomas Weston had much loftier goals in mind years before arriving in Virginia with fish to sell. In 1619, the English Scrooby Separatists (by then going on their twelfth year living in Leyden, Holland) first met with a representative of London investors named Thomas Weston, who conveyed his fellow investors’ willingness to finance the Separatists’ colony in the New World. The Separatists sought a new home at Plymouth Colony where they could live and worship according to their beliefs (Bradford 1898, 55). Weston and his fellow investors were interested in Plymouth Colony, but more so in New England and her natural resources. (In seventeenth century English, “ye” meant “the”, “&c” meant “etc.”, and “yt” meant “that”.) Bradford described the motivation of Weston and the London investors during the colony’s inception: “Unto which Mr. Weston, and ye cheefe of them, begane to incline it was best for them to goe, as for other reasons, so cheefly for ye hope of present profite to be made by ye fishing that was found in yt countrie..” (Bradford 1898, 55). The investors in the Plymouth colony needed to form a company, request a grant of land, and negotiate rights, such as fishing rights. “The typical instrument of English economic expansion overseas at the end of the sixteenth century was the chartered company. These corporations, endowed by the Crown with certain prescribed monopoly rights and possessing extensive influence in Parliament and at court, were created to serve as the spearheads of expansion in a period when the state itself was too weak or too disinterested to take the initiative.”6 In 1620, after his meeting in Leyden, Weston and other London investors traveled to Plymouth, England where, through his buy-in of £500, he joined West Country merchants (merchants from the West England counties of Dorset, Somerset, Cornwall, and Devon) and together they formed the Council for New England. It became the monarch-authorized company that financed, oversaw, and ran Plymouth Colony.
These Council merchants were, like Weston, also interested in New England because of her fisheries. As written in the Council for New England’s own records, these merchants’ primary fishing grounds, Newfoundland, had by 1621 “…fayled of late yeeres…”, so a new fishery was then of particular importance to them.7 8 Though they had heard rumors from mariners of great runs of New England fish, the English in 1620 did not know much about the fisheries off the northeast coast of America. “Not only are there no records regarding correspondence, provisioning, or any other aspect of a New England fishery prior to the first decades of the 17th century, the glorious abundance that New England fishing offered came as a great surprise to the first European explorers. Furthermore, it took a while before the earliest entrepreneurs realized what time of year was best for fishing. …the first English explorers and chroniclers claimed the best fishing season was from March through May and it was not discovered until later in the 1620’s and 1630’s that the fish were biting best during the raw months of January, February, and March.”9 It got the attention of the aristocracy and the powerful in London that the merchant class of the West Country was expanding its market. “English fishing-ships based on West Country ports carried their catch directly from Newfoundland to Spain and brought back specie [sic. coin form of currency], wines, salt and other products to England. With this development the stage was set for prolonged conflict between the West country ports, anxious to develop an expanding dry fishery, and the London chartered companies which sought to organize the trade along monopolistic lines.” (Easterbrook and Aitken, 30). That Weston invested his own money in a business as an individual, along with other investors who together owned a company through joint stock, was not new.
What was relatively new was this: as individuals, those investors were no longer necessarily wealthy courtiers or aristocracy. This was a recently emerging economic engine in England: the first of what eventually became the middle class. The practice came about because, in the preceding decades, English merchants had learned from their more affluent and successful Dutch competitors to no longer trade solely in one good (i.e. wool cloth, coal, or fish) as an individual trader, but to instead diversify trade goods, buying different kinds of goods and then selling those goods by, crucially, creating and maintaining trade routes through relationships with trade partners within Britain and internationally.10 This new middle class or merchant class placed fishing or trade stations on the eastern American coast; tiny though they often were, they were managed and operated by a factor (overseer of trade) who was responsible to his merchant employer back home in England for all trade or truck (barter) transacted by the station. These rudimentary English outposts, though not great in number, were the precursors of the English settlement of New England, many existing before the arrival of the Pilgrims. Some were simply rough individual homes built by men who created lasting relationships with neighboring tribes through trade. Others were rudimentary stations whose men fished and processed their catch into dry or salted fish, ready for sale after long voyages to markets in America, France, Spain, and Britain.11 “Early in 1622 Weston, in spite of his large promises, had abandoned the Plymouth settlement…. His finances were failing, and he sought to recoup himself by colonizing on his own account, perhaps by more devious methods. In January he and John Beauchamp sent over the little Sparrow, a ship of thirty tons, to fish and trade.
The Sparrow went on to the fishing grounds, but her master sent a boat to Plymouth bearing six or seven men, the advance guard of Weston’s colony of “rude fellows” [sic. Wessagusset].” (Foster 1920, 169). The Julian calendar was used in England during the seventeenth century. The first day of the New Year was not January 1 but the 25th of March. Today, to make it clear to the reader whether a historical date is Old Style (O.S.) or New Style (N.S.), we write Julian dates in a specific format: the older O.S. year, a slash, then the modern N.S. (Gregorian) calendar year. For the purposes of this article New Style dates are used unless otherwise noted.12 The colonial Virginia court records, written by scriveners or scribes, were written in Secretary hand, the script used by the British during the early modern period. Secretary hand contains many letters we recognize today, but includes holdouts from the medieval era: individual symbols for prefixes and contractions, standardized abbreviations, and in some cases wholly different letters from what we know. It may be helpful to keep this in mind.13 In a portion of a letter Thomas Weston wrote from London, dated January 12, 1621/2, to Governor Bradford at Plymouth Colony, he described a new partnership and the plan for Wessagusset and the Sparrow: “Beachamp” was John Beauchamp, another London-based investor in the Council for New England. Although by this time Weston had quit the Council for New England, Beauchamp had not, and so they evidently parted ways.
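The Old Style/New Style dual-dating convention used in dates such as “January 12, 1621/2” can be sketched in a few lines. This is an illustrative helper of my own, not from any source; it assumes the conventional Lady Day (25 March) change of year number:

```python
def new_style_year(os_year: int, month: int, day: int) -> int:
    """Return the New Style (historical) year for an Old Style date.

    Under the old reckoning the year number did not change until
    Lady Day (25 March), so dates from 1 January through 24 March
    carry the following year in New Style notation."""
    return os_year + 1 if (month, day) < (3, 25) else os_year

def dual_date(os_year: int, month: int, day: int) -> str:
    """Format a year in the 'O.S./N.S.' dual style, e.g. '1621/2'."""
    ns = new_style_year(os_year, month, day)
    return f"{os_year}/{str(ns)[-1]}" if ns != os_year else str(os_year)

# Weston's letter of 12 January, written in O.S. year 1621,
# falls in N.S. year 1622, hence the notation "1621/2".
print(dual_date(1621, 1, 12))
```

Dates from 25 March onward carry the same year number in both styles, so only January-through-24-March dates need the dual form.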
In part of a subsequent letter to Bradford, dated April 10, 1621/2, Weston explained his new business further: After the voyage of Weston’s “rude fellows” aboard the Sparrow from London to Damariscove Island (Maine), sailing just under 2,800 nautical miles (hereafter, nmi.), one of Weston’s contemporaries described what they found upon their arrival in the New World: “When for instance, the forerunners of Weston’s colony at Wessagusset reached the Damariscove Islands, in the spring of 1622, the first thing they saw was a May-pole, which the men belonging to the ships there had newly set up, “and weare very mery.”14 That the fishermen there had erected a May-pole, used during festive seasonal celebrations and encouraged by King James but not considered Christian behavior by the Puritans, demonstrates the fundamental differences between sailors or fishermen and the Puritans, differences that made for fraught relations between Plymouth and her rougher, less disciplined “rude” English counterparts on the nearby coastline and islands, including Weston’s men. Weston’s attempted second colony, Wessagusset, was meant to be a trade and fishing station. It was located in Massachusetts “nearly opposite the mouth of the Quincy River, and a little if anything north of it… upon the north side of the cove, or indenture of the shore, opposite the mouth of the Quincy River. This cove is unmistakably that now called King’s Cove, formerly known as Hunt’s Hill Cove.”15 It is worth noting that others place the historical site of Wessagusset at present-day Weymouth. Of Wessagusset, “In 1635 the settlement’s name was changed to Weymouth. While the town was incorporated in 1635, town records began in 1641… The first land division occurred in 1636.”16 To be clear, when the Sparrow first landed at Damariscove Island, Weston was not yet aboard. He arrived in America for the first time later in 1622 (Coldham 1974, 167).
Of her first American sailing, Bradford wrote that the Sparrow carried “a small party” of seven “passengers” from Damariscove Island to Plymouth, but Pratt recalled about ten men in total (Bradford 1898, 137).17 Sparrow’s 30 tons may seem, by today’s standards, too small to cross the Atlantic, but “As late as 1600, the average size of English ships involved in London’s foreign trade was only about 80 tons.”18 In a partial national survey taken in 1560, seventy-six ships were included, each weighing 100 tons or more. The 1577 survey counted one hundred thirty-five vessels in the 100 tons and upwards range, but six hundred fifty-six ships of between 40 and 100 tons. The later 1582 survey was the most comprehensive. That year (thirty years before Sparrow’s trans-Atlantic voyage), ships in the 10-80 tons range totaled one thousand two hundred-four, or 82.8% of the total ships counted in Britain in 1582 (Friel 2009). It appears from this data that, at least thirty years before Sparrow sailed the Atlantic, the use of smaller vessels was becoming more common compared with larger ships. After she sailed Weston’s “rude fellows” from Damariscove Island in 1622 to Bradford’s Plymouth Colony (about 118 nmi.), it is not certain what happened to the Sparrow in the months thereafter. It is most probable that she wound up being used by the Plymouth Pilgrims as a fishing vessel. We do know that she was eventually returned to Weston. In perhaps the first record of her after Weston’s arrival, Bradford may have described the Sparrow when he wrote that in 1623: Those who knew Weston noticed in him after this near-death experience an end of “his former flourishing condition”. That Weston finally met some of the very dangers in which he had placed the Plymouth and Wessagusset colonists may have had a lasting impact. Importantly, Bradford does tell us that Weston recovered “his small ship” (Bradford 1898, 179).
Among the final words Bradford wrote about either Weston or the Sparrow, he gives a fascinating description of an event that nearly sank her (a probable second time), through which we learn some details about her design: To be clear, Weston did not pay his ship’s company wages, but put them on shares, meaning they would divvy up any earnings the voyage made. The ship’s company considered turning pirate but in the end was talked out of it and was very generously given wages by Plymouth Colony. They then sailed the Sparrow to trade with the Narragansett people. It was probably their first voyage to trade with them, because they were not aware of what specifically the Narragansett sought in trade, and so Weston and his ship’s company did not make money on that voyage. It became worse when, in a storm, they avoided losing the Sparrow only by cutting her mast and tackling. Fortunately they succeeded, and they saved her from being driven onto her anchors. It was these harrowing experiences that brought Weston and the Sparrow to Virginia. In 1623, the last Bradford ever wrote about Weston and his “little ship” was: Weston and his ship’s company had agreed to sail for Virginia “after they had been to the eastward”, a reference to English deep sea fishing stations to the east of New England in the Atlantic, which included Damariscove Island. It was at one of these stations that Weston and his “rude fellows” either fished for, traded for, or purchased a cargo of fish before sailing the Sparrow to Virginia. Around this time, in Virginia, the colonists were still reeling and recovering from the Great Indian Massacre of not quite a year before, on March 22, 1622 N.S. It was the first violent uprising against Jamestown and it caught the colonists off guard. Believing they were on friendly terms with their neighbors, the Powhatan Indians, the Virginia colonists were suddenly and violently set upon inside the colony fortifications.
One quarter of the colonists were killed. After the attack, Virginia’s settlers repeatedly conducted military raids against the Powhatans. These were still being conducted into the end of summer 1623, when peace was agreed upon by both sides. It was probably about five months after the last of the retaliatory August raids that Weston first arrived in Virginia.19 20 This citation, of the Minutes of the Council and General Court (hereafter, MCGC), is part of what remains today of the Virginia colony’s original court records (which today begin in 1622/3, though the colony was founded in 1607). It is an incomplete record. Thomas Jefferson rescued the then-scattered pages, pulled back together what he could, and saved the collection for posterity. He attempted to place the pages back into order by date (the first time anyone had done so in one hundred-fifty years). Not until the turn of the twentieth century, when it was transcribed from Secretary hand by McIlwaine, was the record easily accessible. First it was published as a nearly monthly serial, in issues from April 1911 through October 1923, in The Virginia Magazine of History and Biography (hereafter, VMHB), published by the Virginia Historical Society; and then years later as a book, MCGC of Colonial Virginia.21 It was amid this strife that the first colonial Virginia court record involving Weston or the Sparrow occurs: Howbeck’s testimony, here, refers to “Canada”, which not only did not yet exist as a nation, but as a word had then only recently been recorded for the first time.24 It is in Lescarbot’s Histoire de la Nouvelle France (1612) that the word Canada is first used. He described only a region of the St. Lawrence River as being Canada.25 The timing of Lescarbot’s first use of ‘Canada’ dovetails with this record of the use of the word. Too, one of Weston’s sailors gave us a clue. As recorded in his testimony, he clarified what the name ‘Canada’ referred to: “…at Dambrells Cove in Canada,…” (McIlwaine 1916, Vol.
24 (4): 343). For the purpose of this article, according to these testimonies, I take it a priori that amongst English fishermen like Weston’s company of seamen and traders, the name ‘Canada’ during the 1620’s referred to a geographic region that included Damariscove Island. Today Damariscove Island is located in Boothbay Harbor, Maine, approximately 108 nmi. from Wessagusset. The island has been known by many similar names: Damoralls Cove, Dambrels Cove Island, Dermer’s Island, and others. This initial court record ends with an incomplete phrase, “It is ord…”. It probably would have read ‘It is ordered…’, the formula noted when the court gave orders, but there is no record of any orders related to this hearing in the MCGC (McIlwaine 1924, 10). Besides the Massacre, there were other reasons the colonists of Virginia were starving by 1623. Food provisions from England in resupply shipments frequently arrived spoiling and inedible. As well, new colonists often arrived with little to no food provisions for themselves for the period after their voyage, and so required food from the depleted stores set aside by the starving colonists.26 Though today we may observe that they could have foraged, raised food, or fished, the colonists of Virginia were a mix of English skilled workers. They were not mostly yeomen who knew how to farm for large numbers of people, nor could they forage for hundreds of settlers. While the colonists had traded with Indians for corn and other food goods, trade had stopped since the Massacre. Though some households kept a kitchen garden, these probably varied in their ability to feed the entire household throughout the year. Few had the resources or time to make butter, cheese, soap, or candles. So, at this time most of these goods were still being imported from England. Food staples were, too. Their situation also had a lot to do with provisioning. Even by 1624 there were still hardly any cattle in the colony.
That some colonists could have fished for themselves is certain, but most would not have had the time. The entire colony was tending full time to tobacco, the only staple crop being raised in large quantities by every Virginia colonist, intended to earn the colony’s investors a profit, and the pressure was on: at that time the Virginia investors had yet to see any profit. Besides, the quantity of fish needed to feed the colony required skilled labor to catch, and then to process the fish so it could be stored. Deep sea fishing and processing was not a specialty common amongst these colonists, who did not, in Virginia, have readily available the ships necessary to fish at sea (Kingsbury 1935 Vol. II, 121, 231). It is perhaps telling that in the MCGC, as it exists today, this first court hearing involving Weston and the Sparrow happens to also be the first in which the commodity ‘fish’, or the word ‘Canada’, was used. Wisely, Weston shipped fish into the Virginia colony at a crucial time. We do know that, at least five months before Weston’s arrival, other vessels brought cargoes of fish for the Virginia colonists, probably the first to do so. A vessel vaguely described as “a barque” made two runs from Canada (with fish); the Furtherance arrived “with above 40,000 of that fish which is little inferior to Lyng, for the supply of the colony”; the Samuel, the Ambrose, and “other ships” arrived in Virginia in the summer of 1623. These were ships sent by either the Virginia Company or English merchants.27 It appears that no cargoes of fish sailed to Virginia for the benefit of the colonists before the summer of 1623. Small personal stores of dried fish were probably sailed from England, Canada, Newfoundland, and European ports as individual deliveries to colonists in Virginia before this time, but none intended as a colony supply.
In other words, Weston was probably among the first merchants to bring a cargo of fish to sell to the starving Virginia colony, and perhaps, too, among the first to bring fish from “Canada”. The “other ships” that arrived in Virginia in summer 1623 may have been a reference to the John and Francis, the Adam, and the Tiger, each expected with fish from Canada. In late 1623 O.S., when Weston arrived, the colony’s governor had set prices for certain commodities because, these being scarce, the colonists buying them were facing price gouging. For fish from “Newfound-Land” the price in the Virginia colony was 15s. per hundredweight in ready money, or £1 4s. in tobacco, which was also used as currency in the Virginia colony. Fish from Canada was £2 per hundredweight in ready money, or £3 10s. in tobacco (Kingsbury 1935, Vol. IV, 271-273). Since the Virginia court record does not tell us the amount of fish that was aboard the Sparrow, we cannot estimate the value of Weston’s cargo, but the fixed pricing protected customers while creating a market for Weston and other merchants. This fixed pricing included fish from Canada, indicating that at that time it was a common commodity. The official warrant or commission that Weston apparently did not have before arriving in Virginia with fish to sell had been issued properly in England, prior to their departures and at different times, to at least four other merchants. These records are important when considering whether Weston was truly among the first merchants to give the Virginia colonists succor, as they are the earliest re-supply commissions specifically for fish for Virginia. Three of the entries are commissions granting the right to fish in New England ‘for the relief of the Virginia colony’. The fourth may or may not have been.
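The pre-decimal fish prices quoted above (from Kingsbury) can be compared with a small worked example. This is an illustrative sketch of my own, assuming only the standard pre-decimal conversion of £1 = 20 shillings:

```python
def to_shillings(pounds: int = 0, shillings: int = 0) -> int:
    """Convert a pre-decimal pounds-and-shillings amount to shillings."""
    return pounds * 20 + shillings

# Fixed prices per hundredweight of fish in the Virginia colony, late 1623 O.S.
newfoundland_money   = to_shillings(shillings=15)            # 15s. ready money
newfoundland_tobacco = to_shillings(pounds=1, shillings=4)   # £1 4s. in tobacco
canada_money         = to_shillings(pounds=2)                # £2 ready money
canada_tobacco       = to_shillings(pounds=3, shillings=10)  # £3 10s. in tobacco

# Canada fish commanded well over twice the Newfoundland price, and in
# both cases payment in tobacco carried a premium over ready money.
print(canada_money / newfoundland_money)       # 40s. vs 15s.
print(canada_tobacco / newfoundland_tobacco)   # 70s. vs 24s.
```

The premium for tobacco over ready money presumably reflected the discount at which tobacco, as a commodity currency, traded against coin.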
The first, in July 1612, was given to a ship sent to fish for the colony, but it returned to England, where sailors filled “the town with ill reports, which will hinder that business…”, and never got any fish to Virginia. The second commission, issued in October 1618, was “38. Project of the intended voyage to Virginia by Capt. Andrews and Jacob Braems, merchant, in the Silver Falcon… to fish upon the coast of Canada”.28 “But the Silver Falcon never reached Virginia. ‘Near the Bermudas she met with a frigate of the West Indies and had trucke with her.’ exchanging their goods for ‘upwards of 20,000lb weight’ of tobacco.” (Note, too, that the Silver Falcon was only 10 tons larger than the Sparrow.) Braems returned home to England, where some of his voyage’s investors claimed of the Silver Falcon that “she returned ‘richly laden with tobacco, plate, pearls and other rich goods worth some £40,000. They were sure they had been cheated of a huge profit from these glamorous goods…”.29 Again, fish never arrived in Virginia. The third entry is November 8, “Commission for Arthur Champernoun, for setting out the Chudley to fish in New England this year.” (Sainsbury 1860, 34). “In November, 1622, Arthur Champernowne had a commission from the Council for New England permitting his vessel, the Chudleigh, an ancestral name, to trade and fish in the waters of New England. This vessel did not sail, it is likely, before the following spring…” which would be 1623 O.S.30 So if Champernowne departed in the spring, the Chudley may have been the first to sail fish into Virginia; his sailing may predate everyone’s. Interestingly, a December 19 entry reiterates Champernowne’s commission but crucially adds, “Capt. Squibb to have a similar commission for the John and Francis.”, which, as recounted above, arrived in Virginia and then was set out again for Canada to return with fish.
If Champernowne’s commission was indeed “similar” to Squibb’s, he, too, may have shipped fish from Canada to the relief of the Virginia colonists in early 1623 O.S. I find no record of this. There is a fourth entry pertaining to fishing rights in New England before summer 1623, but it pertains to Mr. Thomsen, who was set out by the Council for New England in their financial interests only, and not to relieve the Virginia Colony (Sainsbury 1860, 34). It appears that attempts to bring fish “to the relief of Virginia” may have begun in early 1623 O.S., and had at least begun, in small numbers, by late 1623 O.S. Returning to the first court record, two of the men listed as “present” under the record’s date were Councilmen, leaders of the Virginia Colony who judged and ruled on court proceedings. The third man, Dr. Pott, was a freeman of James Cittie. There were two other hearings that day, and all three happened to be Admiralty hearings, court hearings specific to the business of sailing (McIlwaine 1911, Vol. XIX (2): 141-142). In the VMHB McIlwaine transcribed the court record as saying, ‘John Howbeck, aged 35, swore that the Sparrow was Mr. Weston’s and that Weston bought “Becham” out of the ship’. As already stated, McIlwaine adjusted at least one word, “fish”, between his two published transcriptions. Indeed, another word in this record changed between the first published transcription and the second: in the second, McIlwaine spelled “Becham” as “Becgam”. The letter ‘g’ may have been transcribed in place of the ‘h’ in the second transcription because our modern-day letter ‘g’ looks like the Secretary letter ‘h’. McCartney, in her book of biographies of early Virginia colonists, follows McIlwaine by publishing an entry and profile for “Becham” (and “Becgam”), for which the entirety of the biography is taken from this one court hearing.
The original handwritten court record online reads as “Bocham”, sounding phonetically like ‘Beaucham’, or Beauchamp without the ‘p’ sound (McIlwaine 1911, Vol. XIX (2): 141-142) (McIlwaine 1924, 10).31 McCartney makes no mention of Beauchamp.32 Whatever the scrivener may have written, Ford had thought the same thing about this surname (Foster 1920, 170). In a footnote Ford wrote, “Weston is said to have bought Beauchamp out of the Sparrow “before she came from Plymouth.” One Maunder, the purser, laid claim to some interest in her…” Ford did not cite this quote but it appears to be the record of the January 9, 1623/4 Virginia hearing he quoted (Foster 1920, 170). It is helpful to recall that Weston wrote Bradford on April 10, 1621/2, about the ship, and included her name, Sparrow (Foster 1920, 145-147). Altogether, the case can be made that it was probably Beauchamp, Weston’s former business partner, discussed in this court hearing, and not someone named Becham. In fact, as already stated, Beauchamp was a Plymouth Colony investor who remained invested in Plymouth after Weston left the Council. This might at least in part explain why Weston said in court that he bought the Sparrow and her cargo from him, whatever cargo was sold with the ship. That it might indeed have been Beauchamp mentioned in this first case is important because it happens that Thomas Weston in fact had not bought the Sparrow, but went into debt to John Beauchamp for her. That debt was still not paid by 1641. “On October 1641 Abraham Halsey of London, gentleman, aged 56, deposed in London that he was a witness to a deed of 29 March 1622 made between Thomas Weston of London, merchant, and John Bewchampe, whereby Weston contracted a debt of £486.
John Bewchampe, citizen and salter of London, aged 49, deposed at the same time that the debt was still outstanding except for £50 “in Cadize and Crewell Ribboning.” Bewchampe had sent his power of attorney to Anthony Jones, merchant and planter in Virginia, “to aske, demand, sue for recovery and residue of the said Thomas Weston.” (Coldham 1974, 167). Weston moved to Maryland in 1640. That Weston moved when he did is interesting timing, because a suit was probably filed on Beauchamp’s behalf, in Virginia, apparently in 1640 or 1641, but I have found no record of it. It is helpful, here, to remember the two 1621/2 letters Weston wrote Bradford, and that he wrote them just after he had signed the deed with Beauchamp for the Sparrow, in London, as witnessed by Halsey. The Sparrow was not paid for, though. So the testimony, as recorded in the initial Virginia hearing on January 9, 1623/4, was apparently not the truth, and the witness probably perjured himself. Likewise, it is likely that when Weston wrote Bradford that Beauchamp and he had gone into a partnership, it was not true. Perhaps Beauchamp had simply sold Weston the ship, for a promissory note, and no partnership had ever existed. True or not, to say so might have been strategic. Beauchamp always remained an investor in the Plymouth Colony, even after Weston left the venture, so Weston might have described a partnership with Beauchamp, someone Bradford would have known and trusted, as a means to ensure he could expect good relations with Bradford upon his arrival in America. The second colonial Virginia court hearing occurred two years after Weston’s initial arrival in Virginia, on February 20, 1625/6, and also involved a witness (this time Thomas Ramshee, a successful planter in the Virginia colony) who swore and testified that Weston owned the Sparrow. Once more, a witness for Weston apparently perjured himself.
Ramshee also testified that Maunder had, on the initial voyage to America, been her purser but was too poor to properly outfit himself for the voyage and so borrowed money from Weston. At the time of this hearing Maunder was no longer in the Colony but sent notice to Weston that a discrepancy existed in the debts between him and Weston. The court did issue an order this time, requesting that Mr. John Baynam (likely Weston’s factor in Virginia) bring Weston’s accounting to Weston, after which Weston was supposed to bring the accounting to the court, including goods and debts Baynam received from Maunder. It is not recorded in the MCGC whether Baynam delivered Weston’s accounting to him or whether Weston delivered his accounting to the court, as ordered, but from the last of these three colonial Virginia court hearing records we know that indeed he did (McCartney 2007, 589) (McIlwaine 1924, 96). A year later in court, in James Cittie, on January 11, 1626/7, Thomas Weston, “Merchant”, filed a complaint, which is the last colonial Virginia court record in which the Sparrow is mentioned. He alleged that Bainham (Baynam) had paid James Carter (mentioned in the original hearing) seventy-four pounds of tobacco which was rightfully Edward Maunder’s (also mentioned in the first case), who was, at this time, in England. Carter was described as master of the Anne (another merchant vessel) and had recently died. Baynam was ordered to re-pay Weston the seventy-four pounds of tobacco because the court, in its February 20, 1625/6 hearing, had, this record says, ordered Maunder to pay Weston and had not given warrant for Baynam to pay Carter. Baynam said that he would pay Weston. It sounds like, in the end, Baynam paid twice. It is interesting to note that in the first of these three suits the cargo in question was fish, while here tobacco is being discussed. (McIlwaine 1919, Vol. 27 (1): 39) (McIlwaine 1919, Vol. 27 (2): 140).
Tobacco was a form of currency in Virginia. The record does not tell us that payment was made, but we do not read of the Sparrow in court again, so it is likely that it was. When Weston fled, or simply moved, to Maryland he might have taken the Sparrow with him. What is known is that he arrived in Maryland, in 1640, with five people (apparently headrights – passengers who allowed him to obtain land in the Maryland colony in exchange for the expense Weston was to have paid sailing them from England to the New World (if indeed he had)). “At the meeting of the Maryland Assembly, September 5th, 1642, “Mr. Thomas Weston being called pleaded he was no freeman because he had no land nor certain dwelling here &ca. but being put to the question it was voted that he was a Freeman and as such bound to his appearance by himself or proxy whereupon he took place in the house.” of St. George’s Hundred. After, he was made a member of the Maryland Assembly, and obtained a grant the following year for twelve hundred acres, on which he built a manor, naming it Westbury Manor. Officially, Westbury Manor was eventually deemed a safe haven for neighbor women and children in the event of an Indian attack (Johnston 1896, 201). “In Bristol [sic. England] on 20 May 1644 William Palmer of that City, sailor, aged 24, deposed that he was one of the company of the barque John of Maryland of which Thomas Weston was the Master. When they had left Virginia at the end of June 1643…” attempting to avoid the violence of the Civil War at home, they intended to land in Ireland, but the winds did not allow it, and instead they landed in Cornwall, England, in September 1643. “Their 25-ton ship was fully laden with tobacco when it left Virginia but, when the hatches were opened by… the company, it was found to be spoiled because of the leakiness of the vessel during the voyage.
The deponent knew that Thomas Weston had not been to London or received any goods from there for five years; he could swear to this because he had been with Weston from November 1638 until they arrived at Padstow [sic. Cornwall].” (Coldham 1974, 167-168). Weston’s death is recorded as a matter of inheritance. In December 1674 a Richard Norman came before William Hathorn, a Massachusetts official, and swore, “…that Thomas Weston that used formerly to trade in Virginia and soe to New England and afterwards went home for Bristoll and there dyed as by credible and common report.” After which, Weston’s only child, Elizabeth Weston, inherited her father’s land. Elizabeth was raised by a Moses Maverick in Marblehead, Massachusetts. There she married Roger Connant. It is not clear why Weston left a young Elizabeth in Marblehead or with Maverick to raise her (Johnston 1896, 201-203). Neither Weston’s will nor his grave has been located (Coldham 1974, 168). Thomas Weston remains an important party in the history of the English colonization of America, certainly, but he was also an early example of an emerging middle class, as evidenced in the first Virginia colonial court hearing in which he was involved. After twice attempting colonization, he simply sailed a cargo of fish to Virginia, assisting in the colony’s survival, perhaps among the first to do so, and thus began his career as a colony-based merchant. His story is not yet complete. That yet-unknown information about him probably exists in archives or repositories is enticing. For example, records of the departures or arrivals of any of Weston’s vessels, including the Sparrow, and perhaps their cargoes, might still exist. Weston, though he did not always operate within the law or responsibly, eventually achieved legitimacy and success as a merchant.
That towards his later years he obtained an official position of civic leadership in Maryland and was granted land demonstrates how capable the merchant class had become, and how legitimate this merchant in particular had become. “His little ship” was a means to that success, perhaps his only means as a shipping merchant, at least for a while. Although we do not know what became of the Sparrow, we at least understand how merchant vessels, including small ones, were critical to the survival of the first successful English colonists and the colonization of America. Perhaps eventually, through others’ research, we will learn what became of the Sparrow, Weston’s “little ship”.
- Coldham, Peter Wilson. 1974. Thomas Weston, Ironmonger of London and America, 1609 – 1647. National Genealogical Society Quarterly. Volume 62 (3): 167.
- Foster, Francis Althorp. January Meeting, 1921. Pickering vs. Weston. Proceedings of the Massachusetts Historical Society 54 (1920): 229-232.
- Bradford, William. 1898. Bradford’s History of ‘Plimoth Plantation’. Boston. http://www.gutenberg.org/files/24950/24950-h/24950-h.htm
- See Johnson’s ‘Of Plymouth Plant…’ in which Bradford’s manuscript and letters are arranged into chronological order and in which the full text of the Pilgrims’ journals, “A Relation or journal of…” (London 1622), is included. Bradford, William. 2006. Of Plymouth Plantation, edited by Caleb Johnson. Xlibris.com
- Johnston, Christopher. 1896. Thomas Weston and His Family. The New England Historical and Genealogical Register. Vol. 50: 201-206.
- Easterbrook, W.T. and Hugh G.J. Aitken. 1956 (2002). Canadian Economic History. Toronto. University of Toronto Press, 31.
- Christy, Miller, Esq. 1899. Attempts Toward Colonization: The Council for New England and the Merchant Ventures of Bristol, 1621 – 1623. The American Historical Review. Vol. 4 (4): 693.
- Mathews, K. 1968. “A History of the West of England-Newfoundland fishery.” PhD diss., University of Oxford, 3.
- Harrington, Faith. 1985. “Sea Tenure in Seventeenth Century New England: Native Americans and Englishmen in the Sphere of Marine Resources.” PhD diss., University of California, Berkeley, 44.
- Pope, Peter. 1996. Adventures in the Sack Trade: London Merchants in the Canada and Newfoundland Trades, 1627-1648. The Northern Mariner/Le Marin du nord. Vol. VI (1): 2–4.
- Faulkner, Alaric. 1986. Followup Notes on the 17th Century Cod Fishery at Damariscove Island, Maine. Historical Archaeology, Journal of the Society of Historical Archaeology. Volume 20 (2): 86.
- The National Archives. “Palaeography Quick reference.” http://nationalarchives.gov.uk/palaeography/quick_reference.htm Accessed March 24, 2017.
- Hill, Ronald A. 2013. “Interpreting the Symbols and Abbreviations in Seventeenth Century English and American Documents.” Idaho. https://bcgcertification.org/wp-content/uploads/2013/05/Hill-W141.pdf Accessed January 15, 2017.
- Morton, Thomas. 1883. The New English Canaan of Thomas Morton, edited by Charles Francis Adams, Jr. Boston. The Prince Society, 17.
- Winthrop, R.C., Jr. 1891. November Meeting, Site of the Wessagusset Settlement. Proceedings of the Massachusetts Historical Society. Second Series. Vol. 7 (27): 27.
- Chartier, Craig S. 2011. “An Investigation into Weston’s Colony at Wessagusset.”, 13. www.plymoutharch.com/wp-content/uploads/2014/11/50300822-An-Investigation-into-Weston-s-Colony-at-Wessagussett-Weymouth-Massachusetts.pdf
- Pratt, Phineas. 1858. A Declaration of the English People That First Inhabited New England, edited by Richard Frothingham, Jr. New England Historic Genealogical Society. Boston, 7.
- Friel, Ian. 2009. “Elizabethan Merchant Ships and Shipbuilding.” Presentation. Gresham College. http://www.gresham.ac.uk/lectures-and-events/elizabethan-merchant-ships-and-shipbuilding
- Minutes of the Council and General Court, 1622-1624, edited by H.R. McIlwaine. 1911. The Virginia Magazine of History and Biography. Virginia Historical Society. Vol.
19 (2): 115, 147.
- Documents of Sir Francis Wyatt, Governor 1621 – 1626. 1927. The William and Mary Quarterly. Vol. 7 (3): 211–212.
- See McIlwaine’s Prefatory Note explaining in what form the original Virginia colony’s MCGC records exist today (McIlwaine 1911, Vol. XIX (2): 113-123).
- MCGC of Colonial Virginia, edited by H.R. McIlwaine. 1924. Richmond. Virginia State Library, 10.
- McIlwaine published transcriptions of the MCGC twice. It is interesting to note that in the first published transcription, the VMHB version, the word “fish” was transcribed as “lists” (McIlwaine 1911, Vol. XIX (2): 141-142). In the original document it does appear to read ‘fish’. Image 5 of Virginia General Court 1622-29, Hearings, With Minutes, Manuscript/Mixed Material, Library of Congress, https://www.loc.gov/resource/mtj8.064_0002_0573/?sp=5 Accessed January 3, 2019.
- Though not exhaustive, for other MCGC records in which fishermen or traders referred to ‘Canada’, see: (McIlwaine 1911, Vol. XIX (2): 142); (McIlwaine 1911, Vol. XIX (3): 227); (McIlwaine 1911, Vol. XIX (4): 382); (McIlwaine 1911, Vol. XX (1): 37); etc.
- Elliott, A. Marshall. 1888. Origin of the Name ‘Canada’. Modern Language Notes. Vol. 3 (6): 165. It is interesting to note that during Elliott’s important research into the recording of the first uses of the place name he did not include a review of the MCGC records, which repeatedly include instances of early seventeenth-century English sailors’ and merchants’ use of the name, especially given the relatively close geographic proximity of the St. Lawrence River Valley to English trade and fishing stations, like Damariscove Island.
- Kingsbury, Susan Myra. 1935. Records of the Virginia Company of London, Vol. II. Washington D.C. United States Government Printing Office, 174–182, 449, 496.
- Brown, Alexander. 1898. The First Republic in America. Cambridge. The Riverside Press, 516.
- Sainsbury, W. Noël. 1860. Calendar of State Papers, Colonial Series, 1574–1660.
London. Longman, Green, Longman, & Roberts, 13, 19.
- Kaufmann, Miranda. Black Tudors. No date. OneWorld Publications. No page numbers.
- Tuttle, Charles Wesley. 1889. Capt. Francis Champernowne. Boston. J. Wilson & Son, 75.
- Image 5 of Virginia General Court 1622-29, Hearings, With Minutes, Manuscript/Mixed Material, Library of Congress, https://www.loc.gov/resource/mtj8.064_0002_0573/?sp=5 Accessed January 3, 2019.
- McCartney, Martha W. 2007. Virginia Immigrants and Adventurers 1607-1635. Second Edition. Baltimore. Genealogical Publishing Company, 123, 403.
1. Which of the following statements about the sampling distribution of the sample mean is incorrect?
a. The sampling distribution of the sample mean is approximately normal whenever the sample size is sufficiently large (n > 30).
b. The sampling distribution of the sample mean is generated by repeatedly taking samples of size n and computing the sample means.
c. The mean of the sampling distribution of the sample mean is equal to µ (the population mean).
d. The standard deviation of the sampling distribution of the sample mean is equal to σ (the population standard deviation).

2. The standard error of the proportion will become larger as:
a. p approaches 0
b. p approaches 0.5
c. p approaches 1.00
d. n increases

3. The t distribution
a. assumes the population is normally distributed.
b. approaches the normal distribution as the sample size increases.
c. has more area in the tails than does the normal distribution.
d. All of the above.

4. Assume a 95% confidence interval for µ turns out to be (1000, 2100). To make more useful inferences from the data, the researcher wants to reduce the width of the confidence interval. Which of the following will result in a reduced confidence interval width?
a. Increase the sample size.
b. Decrease the confidence level.
c. Increase the sample size and decrease the confidence level.
d. Increase the confidence level and decrease the sample size.

5. In the construction of confidence intervals, if all other quantities are unchanged, an increase in the sample size will lead to a ___________ interval.
c. less significant

Answers: 1. d — the standard deviation of the sampling distribution of the sample mean is the population standard deviation divided by the square root of the sample size (σ/√n), not σ itself, so statement (d) is incorrect. 2. b — p approaches 0.5.

This post answers five conceptual questions on sampling, hypothesis testing and confidence intervals.
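The reasoning behind these answers can be checked numerically. The sketch below (illustrative only, not part of the original post; the values of µ, σ and n are arbitrary) simulates the sampling distribution of the sample mean to confirm its standard deviation is σ/√n, evaluates the standard error of a proportion at several values of p to show it peaks at p = 0.5, and compares z-based confidence-interval widths as the sample size and confidence level change:

```python
import math
import random
import statistics

random.seed(0)

# Q1(d): the standard deviation of the sampling distribution of the
# sample mean is sigma / sqrt(n), not sigma itself.
mu, sigma, n = 50, 10, 25
means = [statistics.mean(random.gauss(mu, sigma) for _ in range(n))
         for _ in range(20000)]
print(statistics.stdev(means))      # close to 10 / sqrt(25) = 2.0

# Q2(b): the standard error of a proportion, sqrt(p * (1 - p) / n),
# is largest when p = 0.5.
def se_prop(p, n=100):
    return math.sqrt(p * (1 - p) / n)

print(se_prop(0.1) < se_prop(0.5) > se_prop(0.9))   # True

# Q4/Q5: the width of a z-based confidence interval, 2 * z * sigma / sqrt(n),
# narrows as n grows or as the confidence level (and hence z) drops.
def ci_width(z, sigma, n):
    return 2 * z * sigma / math.sqrt(n)

print(ci_width(1.96, 10, 100) > ci_width(1.96, 10, 400))    # larger n, narrower CI
print(ci_width(1.96, 10, 100) > ci_width(1.645, 10, 100))   # 90% CI narrower than 95%
```

Note that quadrupling n only halves the interval width, since the width scales with 1/√n rather than 1/n.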
Native Habitats for Native Species

Public land stewardship programs are very different now than they were 50 years ago. When we had a habitat problem during the early days of conservation, we tended to search for a magical solution. We might, for example, introduce a non-native plant, thinking it would meet the needs of all wild things, or at least the species being managed at the time. Such attempts rarely lived up to expectations, and sometimes caused new problems. Most of our early habitat management decisions on public land benefited game animals. That’s largely because game restoration was crucial at that time and, unlike today, hunters and anglers paid virtually all the conservation bills. Game animals remain important in our habitat management programs. However, they share the limelight with a myriad of other plants, animals and natural communities. The natural resource field has changed with the times and learned from our past efforts. Today, our prescription for most habitat problems is to restore natural communities. Through time and experience we’ve learned that the native plant and animal communities that historically occurred here are usually best at supporting all of Missouri’s native wildlife, including healthy populations of game animals. Native plants and animals have adapted to one another over thousands of years and respond to each other in ways that ensure their mutual survival. Our effort to re-establish native plant and animal communities is evident at many of the state’s conservation areas. When you visit these areas you may see a prairie, savanna or wetland restoration in progress, or one that has been recently restored. These natural communities not only benefit bobwhite quail, white-tailed deer and wild turkey, they also support a rich diversity of species relished by birders, botanists, photographers and naturalists. We still conduct agricultural operations at some conservation areas.
Haying is allowed when it benefits specific native plants, plant communities or wildlife habitat. Cropping helps us manage plant succession and is an important early step in restoring native grasses. In some cases, cropping provides high energy food for migrating waterfowl. You may see cattle grazing at a few areas. These cattle replace bison, whose grazing formerly helped maintain prairie habitat. We rely on controlled grazing to modify grassland structure in ways that benefit declining grassland birds, including bobwhite quail. We also use prescribed burning on thousands of acres to mimic the periodic natural fires that formerly contributed to the richness and integrity of our native grasslands. Early Europeans visiting what would become Missouri described a rich world of wild places and native things. From their accounts, we know of the spectacular pine forests and bluestem savannas of the Ozarks, the dark swamps of the Bootheel and the expansive tallgrass prairie of northern and western Missouri. Our conservation areas provide wonderful opportunities to partly restore this “rich world of wild places and native things.” We can manage these areas to re-establish habitats for native species and to protect unique natural communities. As an added benefit, restoring and managing native plant communities results in a wider range of ways that the public can benefit from these areas. We never lose sight of the fact that conservation areas belong to the people of Missouri and are for their benefit. Conservation areas have always provided room for people to enjoy the outdoors. Now they are doing so much more. Why not visit a Missouri conservation area to see what’s developing on your lands? Dave Erickson, Wildlife Division Administrator
The concept of a high-speed railway between the Indian Ocean and Alaska in the context of the Belt and Road Initiative

This paper reviews the antecedents and future potentials of a transcontinental high-speed railway from the Indian Ocean to the Bering Strait and Alaska. The original ideas date from the 19th century, but have regained relevance in the context of China’s Belt and Road Initiative. Apart from increasing connectivity between Eurasia and America and developing economic complementarities between different parts of the world, a high-speed railway would open prospects for exploiting the agricultural and tourism potential of Siberia, mitigating economic and environmental risks in different regions, and raising standards of living. However, like a century ago, key barriers to the project remain a complicated physical geography, low population density and continuing geopolitical tensions.
Shipping containers now dominate the transport industry when it comes to moving nonbulk items. Everything from cars to toothpicks gets packaged in these ubiquitous metal containers, then trucked and shipped all over the world. A company has put a new face on the containers, making them out of corrosion-resistant composites. The resulting containers are easier to clean, weigh 25% less, are simpler to repair, and manufacturing them generates 25% less carbon dioxide compared to the metal alternative. To top it off, the new Cargoshell container being developed by Cargoshell BV, Rotterdam, The Netherlands (www.cargoshell.com), can be collapsed, thanks to hinged sides. One person with no special tools or powered equipment can break down a Cargoshell, which then shrinks to a quarter of its expanded size for easier transport when empty. Other advantages include better insulation, making the container easier to refrigerate. And the door on the Cargoshell rolls up and out of the way, unlike traditional containers, whose doors swing out. Roll-up doors let containers be placed closer to each other and still be accessible.
Gout is the most common type of inflammatory arthritis. It is caused by a buildup of uric acid in the body and the formation of uric acid crystals in the joints. When buildups of uric acid crystals are attacked by the body’s immune system, the affected joints become red, hot, swollen, and sore. People with gout can have flares of extremely painful, warm, red and swollen joints. The big toe is the most common joint affected but other joints can be affected as well. The best way to manage gout is to keep levels of uric acid in a healthy range. Foods including meats, fish, seafood, alcohol, and sugary drinks increase uric acid levels in the body.

Who Gets Gout

Everybody makes uric acid but when the levels are too high it can lead to gout. Gout is most likely to start in men in their 40s or 50s. Gout almost never occurs in women until they reach the menopause or if they have a kidney problem. Gout often runs in families so it is likely that genetics play a role in the development of gout. Men who have family members affected by gout have a higher chance of having gout themselves. This usually involves a genetic problem with the kidneys and how they handle uric acid. Gout is more common in people with kidney problems or those taking certain medications such as diuretics (water pills).

Sudden and Severe Pain in a Joint

Gout presents itself in the form of an attack (or flare) that usually happens very suddenly and often in the early hours of the morning. It’s not uncommon for a person with gout to go to bed feeling fine, and then wake up with a joint that feels like it’s on fire. In most cases only one joint is affected at first. The big toe is the most common joint to be affected in people with gout. Other likely targets are the ankle and knee.

Multiple Joint Involvement

Over time, more than one joint can become affected. Other joints commonly affected by gout include the ankle or foot and the knees. An attack on these joints can make it hard to walk or sleep.
When gout becomes severe, the wrists, elbows and fingers can be involved.

Tophi (Deposits of Hardened Uric Acid)

When uric acid levels remain too high for a long time, uric acid can start to crystallize in other tissues and form deposits of hardened uric acid. These are called tophi (pronounced Toe-Fie). Most often tophi develop over the elbows, on the backs of the hands, and in the finger pads. They can also be found on the tendons behind your ankles or on the outer edges of the ears. Tophi can be painful. If left untreated, tophi can rupture or cause damage to nearby tendons and bone. Gout can be a serious, long-term (chronic) condition. Some people can still have mini-attacks where the joint stays swollen all of the time. These “mini-attacks” aren’t as painful as full ones but affected joints can still feel very sore. When uric acid levels stay high, crystals continue to form, causing the gout to “grumble” along. In the long run, this can cause damage to the joints and even destroy them.

Duration and Frequency of Flares

Gout flares usually go away after about 7 to 10 days of treatment. It might be several months or years before another flare happens. If a person has had one attack of gout, chances are that they will eventually have others. Gout is often diagnosed by a primary care physician (family doctor). If there are questions, a rheumatologist, a type of doctor that specializes in arthritis and autoimmune disease, can be consulted. To diagnose gout, doctors will take a careful and complete history and perform a thorough physical examination. They will note if their patient has risk factors for gout, such as high blood pressure or diabetes, and which joints are affected. Gout often attacks only one joint in the toe, ankle, or knee. Based on this information, the doctor will likely order tests like blood tests and might draw a sample of fluid from an affected joint to inspect. It’s important for doctors to rule out other diseases that can sometimes look like gout.
Common Tests to Diagnose Gout

Joint Fluid Test

To look for crystals of uric acid: a needle is used to draw a sample of the fluid in an affected joint (the fluid within a joint is called synovial fluid). The fluid can be inspected under a microscope to look for uric acid crystals.

To rule out an infection: Sometimes gout can look like an infected joint. The fluid extracted from an affected joint can be cultured on a laboratory dish to see whether or not bacteria grow. Bacteria will grow only if the joint has an infection.

Looking for high levels of uric acid in the blood: Patients with gout have high levels of uric acid in their blood. Uric acid levels can fall during an acute attack of gout, which can make the diagnosis confusing. Some patients with high levels of uric acid don’t develop gout. The results of this test must be taken in context with a patient’s symptoms and the results of other tests.

Looking at kidney function: A blood test for creatinine helps doctors assess kidney function. The results of this test are important to consider because the kidneys clear uric acid from the body.

Ultrasound: Musculoskeletal (MSK) ultrasound can show crystals of uric acid in a joint or identify tophi (hardened deposits of uric acid).

X-Rays: Can help rule out other types of arthritis and can sometimes show classic changes of chronic gout.

In gout, the body’s immune system attacks crystals of uric acid that have formed inside the joints, causing the affected joints to become red, hot, swollen and sore. These crystals form when the level of uric acid is too high.

How Uric Acid Builds Up

The body is constantly making uric acid and is also getting it from many types of food. In healthy people who don’t suffer from gout, the kidneys keep a steady level of uric acid in the blood by filtering excess amounts out and getting rid of it in urine. If the kidneys can’t keep up, uric acid will start building up in the blood.
If the level gets high enough, it will cross a point where it starts to form crystals. The crystals collect in the fluid inside joints. The body’s immune system thinks the uric acid crystals are foreign invaders and attacks them, causing the pain and inflammation that gout is known for. There are several reasons why the kidneys might not be able to get rid of excess uric acid fast enough. It can happen naturally as we get older. Other times people with healthy kidneys simply have a hard time getting rid of uric acid. This is likely genetic and can run in families. Other people have kidney diseases that reduce the function of the kidneys. A person’s diet can influence whether they get gout. Gout has been historically described as the “disease of kings” because it seemed to target wealthy nobility. That’s because, a long time ago, it was correctly thought that gout was the result of eating a diet that was rich in meats and alcohol. Animal foods and alcoholic beverages are a rich source of uric acid. Sweetened drinks (soft drinks, sodas, and fruit juices) containing high-fructose corn syrup can also play a role. Certain medications can interfere with the kidneys’ ability to get rid of uric acid and can lead to gout. The most common example is “water pills” (diuretics), a type of medication that is often used to treat high blood pressure or lower leg swelling. Common risk factors that can increase a person’s chances of developing gout include: kidney problems, age, gender (male), high alcohol intake, excessive intake of sweetened drinks, diuretics (water pills), and a family history of gout. Associated conditions such as high blood pressure, high cholesterol, diabetes or insulin resistance, thyroid disease, and obesity are also seen with gout. Gout should be treated in two ways. The first way is to treat any immediate painful attack (flare). The second is to treat the big-picture problem of too much uric acid in the body.
Addressing the problem of too much uric acid can help reduce the number of future attacks and their severity, and can minimize long-term damage to the joints and other tissues that can be caused by uric acid crystals.

Immediate Treatments for Attacks

Rest, Ice, Compression, and Elevation

A good way to start treating gout is using Rest, Ice, Compression, and Elevation. This is the same way athletes treat acute injuries like a sprained ankle. For example, if a foot is affected by a flare of gout, a patient should put it up on a chair with a pillow, apply a cold pack (if it’s not too cold or uncomfortable), and rest it.

Non-Steroidal Anti-Inflammatory Drugs

The most common medicines used to treat attacks of gout are Non-Steroidal Anti-Inflammatory Drugs (NSAIDs). They tend to work really well as long as a person is taking enough of the right NSAID for the job. The most commonly used anti-inflammatory drug is indomethacin. Other common anti-inflammatory drugs used for gout are Celebrex (celecoxib), Aleve (naproxen), ibuprofen (sold over-the-counter under the brand names Advil or Motrin), and Voltaren (diclofenac). NSAIDs can increase blood pressure, affect the kidneys, and irritate the stomach, potentially causing ulcers, so it is always best to check with a doctor before taking them.

An older but very effective therapy for gout is Colcrys (colchicine). This is a natural remedy that comes from a toxic flower called the meadow saffron (also known as the autumn crocus or naked lady). Colcrys interferes with the body’s inflammatory response in a way that is beneficial to people suffering from a gout attack. In older times, small amounts of this toxic plant were used to cleanse the bowels because it causes diarrhea. The same thing happens today to patients who take too much colchicine! At most, people with gout typically tolerate 0.6 mg taken two to three times a day. The synthetic corticosteroid prednisone is a very effective treatment for gout.
High doses of prednisone taken for a week or two will often clear up gout attacks. One issue with this medication is that it raises blood sugar levels in people with diabetes. Joints can also be injected with the natural steroid cortisone to quickly reduce inflammation. This is a very effective remedy and is relatively low risk because it is a local therapy precisely focused on the affected joints. A newer treatment for acute attacks of gout is Anakinra (Kineret). Anakinra is a biologic medication that blocks a molecule called interleukin-1 and reduces the inflammation associated with gout. It is typically given as a daily subcutaneous injection of 100 mg for three days.

Big-Picture Treatments to Reduce Attacks

Medications that Lower Uric Acid Levels

Zyloprim (allopurinol) and Uloric (febuxostat) are two good and very similar medications that treat gout by lowering the level of uric acid in the body. They work on uric acid in the same way; the difference between them is that the body gets rid of allopurinol through the kidneys and of febuxostat through the liver. Starting or stopping one of these medicines can trigger an attack of gout. Patients who start taking one of these medications should never stop because of an attack – this will make things even worse. In such cases, patients should see their doctor to treat the new attack with different medications. Over time, allopurinol and febuxostat will help prevent attacks of gout.

Maintaining Overall Health

Gout tends to happen in people who have other health problems. For example, gout is more common in people who have high blood pressure, high cholesterol, heart disease, or diabetes, and in those who are overweight or obese. Improving overall health is very likely to improve gout. People with gout should strive to improve their overall health and do everything they can to keep blood pressure and cholesterol at healthy levels. For people who also have diabetes, it is important to keep blood sugar under control. Smokers are urged to quit.
People with gout can improve their condition by changing their diet to avoid or minimize foods and beverages known to significantly increase uric acid in the body. Meats and animal products, fish, seafood, and alcoholic beverages (especially beer) all raise uric acid levels, making an attack of gout more likely. Sweetened drinks like soft drinks, sodas, and fruit juices can also play a role in gout, especially when they contain high-fructose corn syrup; these drinks make it harder for the kidneys to clear out excess uric acid.

Recent research has shown that vegetables do not contribute to gout symptoms. Scientists used to think that vegetables high in a class of chemicals called purines were harmful, but we now know that plant purines do not affect gout.

Attacks can also sometimes be triggered by dehydration and trauma. It’s important to stay well hydrated so the body has enough water to clear excess uric acid through the urine.

Watch Canadian rheumatologist Dr. Andy Thompson discuss gout in this short video:
It’s no secret that the earth is in crisis. And Earth Day is a time to acknowledge how the natural systems that sustain humans and nonhumans on earth are shifting before our very eyes. Deforestation, combined with increased human-caused (anthropogenic) greenhouse gas emissions and other anthropogenic pressures, is exacerbating global climate change. The result: acidification of our oceans, global temperature rise, extreme weather events, mass extinction, and biodiversity loss, among other ill effects. These impacts, which often disproportionately harm the most vulnerable communities, must be met with real and serious action by all of us.

What can an individual or community possibly do to make a difference? We can take a cue from the 2022 Earth Day theme and “Invest In Our Planet” by implementing nature-based solutions at our homes or in our communities. Nature-based solutions draw on the services that nature’s ecosystems provide and are defined by the International Union for Conservation of Nature (IUCN) as “actions to protect, sustainably manage, and restore natural or modified ecosystems, that address societal challenges effectively and adaptively, simultaneously providing human well-being and biodiversity benefits.” Specifically, nature-based solutions may involve capturing storm runoff, supporting healthy and fertile soil (for erosion prevention and food and other crop production), sequestering carbon, generating oxygen, filtering air and water, and providing habitat for wildlife and essential pollinators like native insects and birds.

Are you ready to help protect your community, support and restore healthier ecosystems, and address climate change? “At Highstead, we take an ecologically-minded approach to stewarding our natural areas and cultivated landscapes in ways that demonstrate methods of sustainable ecological design and management,” says Kathleen Kitka, Highstead Landscape and Collections manager.
“We want to show how to enhance habitat diversity and conserve native plants and wildlife.” This strategy plays out in ecologically landscaped settings from the Highstead Barn to preserved habitat like the forested wetland. Take a tour of some of the nature-based solutions and strategies employed across the Highstead landscape.

Situated above a wildflower meadow and below an oak forest, this one-acre landscape is centered on Highstead’s Barn headquarters building. The Barn landscape is similar to a residence and demonstrates how a residential site can be managed as a low-maintenance, ecologically sound, and aesthetically pleasing naturalistic landscape. Native plantings blend the Barn aesthetically into its natural surroundings, create habitat for wildlife, reduce maintenance and pollution, and help maintain a sense of place.

Oaks and other tree species pull carbon dioxide from the atmosphere and store it in their woody biomass, and their storage capacity grows the longer the trees are allowed to age. What’s more, scientists at the Birmingham Institute of Forest Research recently found that mature oak trees (Quercus robur) with sufficient available nutrients increased their photosynthetic response when exposed to elevated carbon dioxide levels. The oak genus (Quercus), with over 90 species in North America, also offers substantial pollinator power, supporting 897 caterpillar species in the United States along with other insect species – more than any other native tree or plant.

Closer to the built environment, tree cover provides additional benefits, like those described in Highstead senior ecologist Ed Faison’s 2021 Arnoldia article, Backyard Natural Climate Solutions. Faison detailed how trees standing within sixty feet of his house provide summer cooling and winter insulation, resulting in decreased energy expenditure and reduced carbon emissions.
On the hillside east of the Barn, the wildflower meadow fills a substantial viewshed and is an example of a lawn alternative. This two-acre ecosystem was created following construction of the adjacent pond. It was initially planted with a mix of clover and grass seed to prevent erosion, and one half of the expanse was subsequently seeded with North American native prairie grasses and forbs. This seeded portion of the meadow proved less susceptible to colonization by invasive species than the unseeded half. The meadow is maintained as habitat for wildlife, including songbirds, butterflies, and other pollinating insects that depend on native plants and grasses to complete their lifecycles. It is mowed annually to prevent it from transitioning to forest. If you don’t have an existing meadow or space for one in your backyard, you can plant a wildflower garden as a foundation planting near the house or in ornamental plant containers on your deck or patio.

Fed by an intermittent stream flowing from the adjacent wooded swamp, this nearly three-acre human-made pond was created to enhance the diversity of native plants and habitats on the property. Situated downhill from the Barn, it also provides an aesthetic focal point for the Highstead landscape. Ponds and wetlands play an important role in slowing the flow of storm runoff, which reduces urban flooding. The pond maintains a fairly stable year-round water level, supporting a diversity of aquatic and wetland vegetation that serves as habitat for wildlife associated with inland ponds and marshes, like wood duck (Aix sponsa), painted turtle (Chrysemys picta), and red-spotted newt (Notophthalmus viridescens). A buffer zone of un-mowed vegetation is maintained along the pond edge for wildlife habitat, water quality, and visual continuity with the adjacent meadow.
The south and west edges of the pond were altered during its construction. To restore this area to a more naturalistic state, native trees and shrubs were planted whose naturally-growing counterparts are indigenous to the adjacent forested swamp – including Clethra alnifolia, or sweet pepperbush, a favorite of bees when it blooms around the pond and throughout the wetland at the height of summer. If you have a garden or orchard and want to attract pollinators, Clethra is a helpful plant – it is also beautiful and fragrant!

Adjacent to the pond is a red maple swamp, a forested wetland ecosystem. Wetlands act as holding basins for storm water and runoff: they reduce the velocity of the water and release it into the environment gradually over time, lowering the severity of flooding and downstream erosion. In addition, wetlands naturally filter excess sediments and chemicals from water, and on a larger scale they are essential carbon stores, as their plant communities and soils lock up carbon that would otherwise be released to the atmosphere as carbon dioxide. At the same time, this forested wetland serves as essential habitat for rare native species and provides suitable territory for native tree and shrub species, benefiting pollinators and diverse wildlife.

Red maple and yellow birch dominate the overstory. Spicebush, winterberry, and sweet pepperbush comprise the tall shrub layer. Skunk cabbage and cinnamon fern fill out the thick herbaceous layer along with various sedges, marsh marigold, and marsh blue violet. In addition to common animal species such as bobcat, barred owl, and spotted turtle, at least one rare species inhabits the swamp – the Eastern box turtle (Terrapene carolina carolina), a species of special concern in Connecticut.
An inventory of lichens at Highstead conducted by Douglas Ladd of the Missouri Nature Conservancy found that “the low wet valley of the swamp contains some of the most sensitive lichens, including species restricted to high-quality natural habitats.”

Oak-Mountain Laurel Forest

Rocky ledges and dry, acidic soils on the western half of the property support 100+-year-old oak trees with a towering mountain laurel (Kalmia latifolia) understory that spreads over 55 acres. These oaks are powerful players in carbon sequestration and air and water filtration, and they provide habitat for forest-dwelling fauna like pileated woodpeckers, wild turkeys, and white-tailed deer. Mountain laurel is widespread throughout Northeastern U.S. forest understories and has a mutually beneficial relationship with mycorrhizal fungi: these soil microorganisms give Kalmia better access to soil nutrients, like nitrogen, and Kalmia provides the fungi with carbon from photosynthesis. This kind of nutrient cycling is a crucial component of healthy forest ecosystems.

At the northwest corner of the property, a two-acre enhanced oak woodland demonstrates a naturalistic landscape that is fenced for protection from deer. Several species of native deciduous azaleas and companion plants were added for aesthetics, for plant diversity, and to lengthen the woodland’s flowering season. The site is allowed to evolve naturally and is presently undergoing a dramatic increase in herbs and woody plant regeneration thanks to the exclusion of deer.

On the eastern hilltop drumlin is a meadow of introduced grasses, a land type common in New England since the end of the colonial period, preserved for its scenic and cultural heritage and its habitat value. It is now an uncommon habitat in southern New England due to agricultural decline, natural successional processes, and increased development.
Although the grasses are not native, locally adapted native forbs like common milkweed (Asclepias syriaca) and Indian hemp (Apocynum cannabinum) are interspersed throughout and are a magnet for pollinating insects such as bees, wasps, and butterflies. Milkweed plays an essential role in the survival of monarch caterpillars, which rely on this plant for food. Today the meadow serves as critical nesting habitat for the migratory bobolink (Dolichonyx oryzivorus), a Connecticut species of special concern. Highstead mows the grassland meadows in late summer to ensure that young birds have an opportunity to fledge successfully. In addition to providing habitat for grassland-adapted flora and fauna, grassland meadows sequester carbon by fixing it underground, in contrast to forests, where carbon is stored aboveground in woody biomass and leaves.

Everybody has a role when it comes to climate action. You can start small by planting natives in containers or strengthening your existing gardens with pollinator-friendly species. You can think big without redoing your entire landscape by working around nonnatives, or by preserving your forest, meadow, or pond as a conservation corridor to support biodiversity and absorb carbon dioxide. Your sustained connection with the planet is vital, so if you are ecologically inclined, consider the significant improvements you can make by applying nature-based solutions and strategies to your home or community spaces.

Learn More About Nature-Based Climate Solutions
- Backyard Climate Solutions by Edward Faison
- Doug Tallamy on a New Conservation Approach in Your Backyard
- 3 Ways to Support Your Local Pollinator at Home
- Native Plant Society of the United States (network)
- Save Plants, Save The Planet, Save Ourselves — Native Plants and Nature-Based Solutions to Climate Change And Other Threats to Humanity (Virginia Native Plant Society)
Interviews with fishermen in Golabandha, Odisha © WTI/IUCN

The objectives of this project are to:

Marine megafauna aggregate seasonally in large numbers in offshore waters as part of their breeding and feeding requirements (e.g., olive ridley turtles, Lepidochelys olivacea). Along with other megafauna, whale sharks (Rhincodon typus) are also known to aggregate spatially. Despite its widespread distribution, very little is known about this giant fish. Although regular sightings have been reported across the globe, only a few studies of whale sharks have been conducted worldwide. Studies suggest that the species may exhibit variable behavioural traits, and thus local, isolated conservation initiatives restricted to a particular zone or state may not be sufficient for effective conservation of the species. More consolidated approaches across a larger ‘landscape’ of marine environment will be required to successfully conserve the species.

In India, records of the presence of whale sharks, in the form of landings, go back as far as 1889. In fact, the only detailed information available from India comes mostly from reports of beached whale sharks. Additionally, it is well known that whale sharks have been persecuted in large numbers along the coast of Gujarat, and possibly in other states too, primarily for the oil produced from their liver. Only one long-term research and conservation project on the whale shark has been initiated in India so far. This ongoing project, initiated by WTI along with TATA Chemicals Limited, focuses on spreading awareness of the plight of the species, understanding its biology, and assessing the feasibility of whale shark tourism for its long-term survival. The project also focuses on understanding the biology, demography and ecology of the species.
Satellite tagging of one of the whale shark populations has yielded first-hand information on the local movement of these animals between different coastal states along the west coast (WTI, unpublished data). In Gujarat, WTI’s efforts over the last 12 years have helped not only to identify large aggregations of whale sharks along the coastline, but also to put an end to the mindless slaughter of whale sharks. To continue this effort to stop the organized hunting of the species, additional data is required from several maritime states other than Gujarat, pertaining to hunting, beaching, and temporal presence.

In 2012, WTI initiated a questionnaire-based survey across the west coast of India (four states) in order to achieve the above-mentioned objective. The project revealed crude but crucial information on whale shark aggregation along the west coast of India. The proposed project aims to collect information on the spatial and seasonal aggregation of whale sharks, along with other marine megafauna, through secondary accounts from fishers, and to understand how coastal fishing communities can benefit from this information to improve their livelihoods. A similar survey along the Andhra Pradesh coast identified a whale shark aggregation offshore of the Coringa mangroves. In the proposed project we would like to investigate whether the offshore waters of the Bhitarkanika mangroves also harbour any congregation of marine megafauna.

Once the project is launched, the state department will benefit from the information gathered through the project, which will also help it to implement localised management plans. Additionally, the information generated will contribute to the global understanding of whale shark distribution and habitat preferences, along with those of other megafauna.
Additionally, once important hotspots are identified, local fishing communities will be targeted for involvement in active conservation measures, such as rescues, for which appropriate compensatory schemes will also be developed for their benefit.

Livelihood linkages: If hotspots are found across the study area, local fishing communities will be sensitised to the presence of whale sharks and other marine megafauna. Coastal tourism can be promoted with community participation around these hotspots. This in turn can generate extra income for coastal fishing communities, thereby reducing their dependence on marine resources for their livelihoods. All of this will constitute future continuation phases of the current project, under a long-term conservation approach.

The project outputs will be:

The east coast of India is more prone to cyclones: about 80 per cent of the total cyclones generated in the Indian Ocean strike the east coast of India. There are two definite tropical cyclone seasons in the North Indian Ocean, one from May to June and the other from mid-September to mid-December. May, June, October and November are known for severe storms. The destructive effect of cyclonic storms is confined to coastal districts, with maximum destruction within 100 km of the centre of the cyclone and to the right of the storm track. Death and destruction purely due to winds are relatively low; the collapse of buildings, falling trees, flying debris, electrocution, rain, aircraft accidents, and disease from contaminated food and water in the post-cyclone period also contribute to loss of life and destruction of property.

18th Jul 2016 to 17th May 2017
Odisha State Forest Department
Dr. BC Choudhury
Wildlife Trust of India
Definition of vapor–pressure thermometer: a thermometer in which the variable saturated vapor pressure of a volatile liquid is used as a measure of the temperature, and which thus has the advantage over some other types of thermometers of being free from errors due to bulb expansion.
James Samuel Coleman (May 12, 1926 – March 25, 1995) was a renowned American sociologist, theorist, and empirical researcher. He was elected president of the American Sociological Association. Coleman studied the sociology of education and public policy, and was one of the earliest users of the term "social capital". His Foundations of Social Theory influenced sociological theory. His The Adolescent Society (1961) and "Coleman Report" (Equality of Educational Opportunity, 1966) were two of the most heavily cited books in educational sociology. The landmark Coleman Report helped transform educational theory, reshape national education policies, and influence public and scholarly opinion regarding the role of schooling in determining equality and productivity in the United States.

The son of James and Maurine Coleman, he spent his early childhood in Bedford, Indiana, and then moved to Louisville, Kentucky. After graduating in 1944, he enrolled at a small school in Virginia but left to enlist in the U.S. Navy during World War II. After he was discharged, he transferred to Purdue University. Coleman received his bachelor's degree in chemical engineering from Purdue University in 1949, and received his Ph.D. from Columbia University in 1955, where he came under the influence of Paul Lazarsfeld.

Coleman achieved renown with two studies on problem solving: An Introduction to Mathematical Sociology (1964) and Mathematics of Collective Action (1973). He taught at Stanford University and then at the University of Chicago. In 1959 he moved to Johns Hopkins University, where he taught until 1973 before returning to Chicago, where he then directed the National Opinion Research Center. In 1991 Coleman was elected President of the ASA. Coleman is widely cited in the field of sociology of education.
In the 1960s, he and several other scholars were commissioned by the US Department of Education to write a report on educational equality in the US. It was one of the largest studies in history, with more than 150,000 students in the sample, and the result was a massive report of over 700 pages. That 1966 report – titled Equality of Educational Opportunity, but often simply called the "Coleman Report" – fueled a debate about "school effects" that has continued ever since.

The report was commonly presented as evidence, or an argument, that school funding has little effect on student achievement. A more precise reading of the Coleman Report is that student background and socioeconomic status are much more important in determining educational outcomes than are measured differences in school resources (i.e., per-pupil spending). At the same time, differences among schools, and particularly among teachers, have a very significant impact on student outcomes. Coleman found that, on average, black schools were funded on a nearly equal basis by the 1960s. The research also suggested that socially disadvantaged black students profited from schooling in racially mixed classrooms (a finding subsequently confirmed by other research). This was a catalyst for the implementation of desegregation busing systems, ferrying black students to integrated schools.

Following up on this, in 1975 Coleman published the results of further research, this time into the effects of school busing systems intended to bring lower-class black students into higher-class mixed-race schools. His conclusion was that white parents moved their children out of such schools in large numbers – the phenomenon now known as "white flight". His 1966 report had suggested that black students would benefit from integrated schooling only if a majority of the students in the classroom were white; the mass busing system had failed.
Coleman's findings regarding "white flight" were not well received in some quarters, particularly among some members of the American Sociological Association, and efforts sprang up during the mid-1970s to revoke his ASA membership. Coleman remained a member and, ironically, years later became the ASA's president.

Another controversial finding of the report was that 15 percent of black students fell within the same range of academic accomplishment as the upper 50 percent of white students; this same group of black students, moreover, scored higher than the lower 50 percent of white students. The findings therefore offer little support to racist arguments. Additionally, Asian-Americans repeatedly met and exceeded the achievement levels of whites. The tests administered in these schools, however, were not measuring intelligence, but rather the ability to learn and perform in the American environment. The report states: "These tests do not measure intelligence, nor attitudes, nor qualities of character. Furthermore they are not, nor are they intended, to be 'culture free.' Quite the reverse: they are culture bound. What they measure are the skills which are among the most important in our society for getting a good job and moving to a better one, and for full participation in an increasingly technical world."

Coleman was a pioneer in the construction of mathematical models in sociology with his book Introduction to Mathematical Sociology (1964). His later treatise, Foundations of Social Theory (1990), made major contributions toward a more rigorous form of theorizing in sociology based on rational choice. Coleman wrote more than thirty books and published numerous articles. He also created an educational corporation that developed and marketed "mental games" aimed at improving the abilities of disadvantaged students.
Coleman made it a practice to send his most controversial research findings "to his worst critics" prior to their publication, calling this "the best way to ensure validity." At the time of his death, he was engaged in a long-term study titled "The High School and Beyond," which examined the lives and careers of 75,000 people who had been high school juniors and seniors in 1980.

- Union Democracy (1956, with Seymour Martin Lipset and Martin Trow)
- The Adolescent Society (1961)
- Introduction to Mathematical Sociology (1964)
- Equality of Educational Opportunity (1966)
- Youth: Transition to Adulthood (1973)
- High School Achievement (1982)
- Individual Interests and Collective Action (1986)
- "Social Theory, Social Research, and a Theory of Action," American Journal of Sociology 91: 1309-1335 (1986)
- "Social Capital in the Creation of Human Capital," The American Journal of Sociology, Vol. 94, Supplement: Organizations and Institutions: Sociological and Economic Approaches to the Analysis of Social Structure, pp. S95-S120 (1988)
- Foundations of Social Theory (1990)
- Redesigning American Education (1997, with Barbara Schneider, Stephen Plank, Kathryn S. Schiller, Roger Shouse, & Huayin Wang)

- ↑ Jon Clark, James S. Coleman (1996), pp. 36-41
- ↑ Geoffrey D. Borman and Maritza Dowling, "Schools and Inequality: A Multilevel Analysis of Coleman's Equality of Educational Opportunity Data," Teachers College Record, May 2010, Vol. 112, Issue 5, pp. 1201-1246
- ↑ 3.0 3.1 Kiviat, Barbara J. (2000), "The Social Side of Schooling," Johns Hopkins Magazine, April 2000, accessed 30 December 2008
- ↑ 4.0 4.1 Hanushek, Eric A. (1998), "Conclusions and Controversies about the Effectiveness of School Resources," Economic Policy Review, Federal Reserve Bank of New York, 4(1): pp. 11-27, accessed 30 December 2008
- ↑ Eric A. Hanushek (2003), "The failure of input-based schooling policies," Economic Journal 113, no. 485 (February): F64-F98
- ↑ Raymond Wolters, Race and Education, 1954-2007 (University of Missouri Press, 2008), chapter 6
- ↑ Joshua D. Angrist and Kevin Lang (2004), "Does school integration generate peer effects? Evidence from Boston's Metco Program," American Economic Review 94, no. 5 (December): 1613-1634; Steven G. Rivkin and Finis Welch (2006), "Has school desegregation improved academic and economic outcomes for blacks?" in Handbook of the Economics of Education, edited by Eric A. Hanushek and Finis Welch, Amsterdam: North Holland: 1019-1049; Eric A. Hanushek, John F. Kain, and Steve G. Rivkin (2009), "New evidence about Brown v. Board of Education: The complex effects of school racial composition on achievement," Journal of Labor Economics 27, no. 3 (July): 349-383
- ↑ Kuran, Timur (1997), Private Truths, Public Lies: The Social Consequences of Preference Falsification, Cambridge, MA: Harvard University Press, p. 149
- ↑ Coleman, J. S. (1989), "Response to the sociology of education award," Academic Questions, 2, pp. 76-78
- ↑ Editor's personal conversation with James S. Coleman

- Obituary in the University of Chicago Chronicle
- American National Biography Online
- Photo of James Coleman
At least four complex processes, alone or combined, can lead to diabetic heart disease (DHD): coronary atherosclerosis; metabolic syndrome; insulin resistance in people who have type 2 diabetes; and the interaction of coronary heart disease (CHD), high blood pressure, and diabetes. Researchers continue to study these processes because all of the details aren't yet known.

Atherosclerosis is a disease in which plaque builds up inside the arteries. The exact cause of atherosclerosis isn't known. However, studies show that it is a slow, complex disease that may start in childhood. The disease develops faster as you age.

Coronary atherosclerosis may start when certain factors damage the inner layers of the coronary (heart) arteries. These factors include:
- High amounts of certain fats and cholesterol in the blood
- High blood pressure
- High amounts of sugar in the blood due to insulin resistance or diabetes

Plaque may begin to build up where the arteries are damaged. Over time, plaque hardens and narrows the arteries. This reduces the flow of oxygen-rich blood to your heart muscle. Eventually, an area of plaque can rupture (break open). When this happens, blood cell fragments called platelets (PLATE-lets) stick to the site of the injury. They may clump together to form blood clots.

Metabolic syndrome is the name for a group of risk factors that raises your risk of both CHD and type 2 diabetes. If you have three or more of the five metabolic risk factors, you have metabolic syndrome. The risk factors are:
- A large waistline (a waist measurement of 35 inches or more for women and 40 inches or more for men).
- A high triglyceride (tri-GLIH-seh-ride) level (or you're on medicine to treat high triglycerides). Triglycerides are a type of fat found in the blood.
- A low HDL cholesterol level (or you're on medicine to treat low HDL cholesterol). HDL sometimes is called "good" cholesterol because it helps remove cholesterol from your arteries.
- High blood pressure (or you're on medicine to treat high blood pressure).
- A high fasting blood sugar level (or you're on medicine to treat high blood sugar).

It's unclear whether these risk factors have a common cause or are mainly related by their combined effects on the heart. Obesity seems to set the stage for metabolic syndrome. Obesity can cause harmful changes in body fats and in how the body uses insulin. Chronic (ongoing) inflammation also may occur in people who have metabolic syndrome. Inflammation is the body's response to illness or injury. It may raise your risk of CHD and heart attack. Inflammation also may contribute to or worsen metabolic syndrome. Research is ongoing to learn more about metabolic syndrome and how metabolic risk factors interact.

Insulin Resistance in People Who Have Type 2 Diabetes

Type 2 diabetes usually begins with insulin resistance. Insulin resistance means that the body can't properly use the insulin it makes. People who have type 2 diabetes and insulin resistance have higher levels of substances in the blood that cause blood clots. Blood clots can block the coronary arteries and cause a heart attack or even death.

The Interaction of Coronary Heart Disease, High Blood Pressure, and Diabetes

Each of these risk factors alone can damage the heart. CHD reduces the flow of oxygen-rich blood to your heart muscle. High blood pressure and diabetes may cause harmful changes in the structure and function of the heart. Having CHD, high blood pressure, and diabetes together is even more harmful: the combination can severely damage the heart muscle. As a result, the heart has to work harder than normal. Over time, the heart weakens and isn't able to pump enough blood to meet the body's needs. This condition is called heart failure. As the heart weakens, the body may release proteins and other substances into the blood. These proteins and substances also can harm the heart and worsen heart failure.
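The "three or more of five" rule for metabolic syndrome is mechanical enough to express in code. Below is a minimal sketch in Python. The text above specifies only the waistline cutoffs (35 inches for women, 40 for men) and the three-of-five rule; it gives no numeric thresholds for the other four factors, so each of those is passed in as a simple yes/no ("present, or on medicine to treat it"), and the function name is an invented illustration:

```python
# Sketch of the "three or more of five" metabolic syndrome rule described above.
# Only the waistline cutoffs come from the text; the other four factors are
# supplied as booleans ("present, or on medicine to treat it").

def has_metabolic_syndrome(waist_inches: float,
                           is_female: bool,
                           high_triglycerides: bool,
                           low_hdl: bool,
                           high_blood_pressure: bool,
                           high_fasting_sugar: bool) -> bool:
    """Return True if three or more of the five risk factors are present."""
    waist_cutoff = 35 if is_female else 40      # 35 in (women) / 40 in (men)
    large_waistline = waist_inches >= waist_cutoff
    factors = [large_waistline, high_triglycerides, low_hdl,
               high_blood_pressure, high_fasting_sugar]
    return sum(factors) >= 3

# A 42-inch waistline (male) plus high triglycerides and high blood pressure
# makes three factors, so the rule is met:
print(has_metabolic_syndrome(42, False, True, False, True, False))  # True
```

Counting booleans with `sum()` keeps the rule explicit: each list entry is one of the five risk factors, and the threshold of three matches the definition in the text.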
US 5884273 A The present invention is a hand-held micro-computer with an attached printer that receives input from a physician and prints out a legible prescription slip for the physician's signature. The micro-computer has a keypad, display, and memory. The memory stores information about prescription drugs and physicians who have access to the micro-computer. When a physician wants to "write" or prepare a prescription slip, the physician enters a personal identification number, thus gaining access to the micro-computer. The physician then selects a drug to prescribe, either by entering a drug identification number or scrolling through a list of drug names. After the specific drug has been selected, the physician may change the default information for that drug or accept it. Once the information is correct, the physician prints out the prescription slip on the attached printer. The prescription slip contains all the relevant and necessary information for the patient and pharmacists and need only be signed by the physician before it can be filled. 1. A method of writing a prescription slip, said prescription slip having a drug name, quantity, strength, and dosage, said prescription slip being generated by a micro-computer and printer, said micro-computer having a keypad, a display, and a memory, said memory storing information on a plurality of drugs and physicians, said method comprising the steps of: inputting a physician identification number corresponding to one of said plurality of physicians stored in said memory; selecting an entry from a menu shown on said display of said micro-computer; entering an identification corresponding to a drug, said drug having default characteristics, said default characteristics corresponding to said drug name, said quantity, said strength, and said dosage; and printing said prescription slip from said printer. 2. 
The method of writing a prescription slip as recited in claim 1, further comprising the step of modifying said default characteristics of said drug. 3. The method of writing a prescription slip as recited in claim 1, further comprising the step of choosing a refill amount for said drug, said refill amount appearing on said prescription slip after said prescription slip has been printed. 4. The method of writing a prescription slip as recited in claim 1, wherein said inputting step, said selecting step and said entering step are performed by depressing a series of keys on said keypad of said micro-computer. 5. The method of writing a prescription slip as recited in claim 1, further comprising the step of choosing whether to allow substitutions for said drug. 6. The method of writing a prescription slip as recited in claim 1, wherein said identification is a drug identification number. 7. The method of writing a prescription slip as recited in claim 1, wherein said identification is said drug name. 8. An apparatus for use in preparing a prescription slip, said apparatus comprising: a micro-computer having memory means for storing information about prescription drugs, said information including names of said prescription drugs, selecting means for said user to select a prescription drug from said memory means; and a printer operatively connected to said micro-computer and responsive to said micro-computer so that said printer prints said prescription slip when said user selects said prescription drug from said memory, said prescription slip containing said name of said prescription drug wherein said apparatus will fit into the palm of the user. 9. The apparatus as recited in claim 8, wherein said information contains nominal dosages and strengths for said prescription drugs. 10. 
The apparatus as recited in claim 8, wherein said information contains nominal dosages and strengths for said prescription drugs, and wherein said printer prints said dosage and strength of said prescription drug on said prescription slip. 11. The apparatus as recited in claim 8, wherein said information contains nominal dosages and strengths for said prescription drugs, and wherein said selecting means allows the user to change said dosage and said strength of said prescription drug. 12. The apparatus as recited in claim 8, wherein said user selects said prescription drug by inputting a drug identification number. 13. The apparatus as recited in claim 8, wherein said user selects said prescription drug by selecting said name of said prescription drug from a list of said prescription drugs in said memory means. 14. The apparatus as recited in claim 8, wherein said micro-computer further comprises means for updating said information about said prescription drugs. 15. A system for generating a prescription slip, said system comprising: at least one micro-computer having communication means for communicating with said computer, said micro-computer having memory means for storing information about prescription drugs, said information including names of said prescription drugs, selecting means for said user to select a prescription drug from said memory means; and a printer operatively connected to said micro-computer and responsive to said micro-computer so that said printer prints said prescription slip when said user selects said prescription drug from said memory, said prescription slip containing said name of said prescription drug. 16. The system as recited in claim 15, wherein said communication means enables said information about said prescription drugs and information about physicians to be transferred between said computer and said micro-computer. 17.
The system as recited in claim 15, wherein said information includes nominal dosages and strengths for said prescription drugs, and wherein said selecting means allows the user to change said dosage and said strength of said prescription drug. 18. The system as recited in claim 15, wherein said printer prints a dosage and strength for said prescription drug, a quantity, and refill amount for said prescription drug, a Drug Enforcement Agency number for a physician, and a location for a physician's signature and patient's name on said prescription slip. 1. Field of the Invention The present invention relates to a device for dispensing prescription slips. In particular, the present invention relates to a hand-held microcomputer and printer that will receive input and generate a prescription slip from a physician to a patient. 2. Discussion of Background At times a physician's handwriting can be illegible, causing problems for medical personnel, pharmacists, and patients. In fact, the poor handwriting of physicians has become legendary. This problem arises when physicians make entries into the medical file of the patient, leave instructions for nurses, order procedures for patients, and prepare prescription slips for patients and pharmacists. Several devices are available that alleviate the necessity of physicians writing the above information by hand. For example, physicians normally dictate information to be entered into the patient's file and instructions to nursing personnel, to reduce discrepancies between what is ordered and what is to be done. Additionally, word processors are sometimes used to enter information into patient files. However, when physicians write prescriptions containing the name of a type of drug, amounts, and dosages, the writing can be illegible both to pharmacists and patients. If the pharmacist cannot read the prescription, the pharmacist may need to call the physician's office to clarify the handwriting. 
This not only delays filling the prescriptions, especially if they are being filled after office hours, but patients who urgently need medication are forced to wait unnecessarily. And from a physician's viewpoint, clarification takes time away from other office staff duties. If the pharmacist does not call the physician, other complications or errors may occur. In addition, many physicians abbreviate common drug names, which sometimes results in misinterpretations by pharmacists. These errors are common and have resulted in increased professional liability insurance premiums due to claims caused by prescription errors, which can and have resulted in disabilities and deaths. Consequently, it is imperative that a prescription slip be legibly written for the benefit of the pharmacist, who must read and dispense the appropriate medication, and for the benefit of the patient, who must read and take the appropriate amount of medication at the appropriate times. Because physicians cannot be made or required to write more legibly, there is a need for a device that will legibly print out a prescription which includes the appropriate type of medication, quantities, and dosages. According to its major aspects and broadly stated, the present invention is a hand-held micro-computer having a printer attached thereto that accepts input from a physician so that the input, combined with the medical information available in memory, prints a legible prescription slip. A prescription slip usually contains relevant information about the prescribed drug, including its strength, dosage, quantity, refill amount and whether a substitute is allowed. Additionally, the prescription slip will contain a line for the patient's name and the physician information including the physician's name, Drug Enforcement Agency (DEA) number, the physician's address, and a place for the physician to sign. 
It is also contemplated that the micro-computer can be used as a single unit with a main computer or as part of a larger group, where the group is supported by the main computer capable of storing all the information from each micro-computer. The micro-computer is preferably a hand-held, portable computer having a display, a printer, and a keypad that permits alphanumeric input. The computer also has read-only-memory (ROM) and random access memory (RAM) that can be used by the user to store medical and drug information, and in an alternative embodiment, patient information. For instance, the computer can store information about different drugs, including usual dosages and a specific ID number associated with a certain drug. Additionally, in an alternative embodiment, the computer can contain important information about a specific patient, including known reactions for certain drugs and medication previously prescribed. Furthermore, the computer may be able to upload and download information about drugs, physicians and potentially patients from a main system, including the above information. Consequently, updated information on drugs, physicians, and patients may potentially be maintained in the portable units. In use, it is contemplated that a physician will use the present invention to "write" a legible prescription slip for a patient. For this purpose the physician will log into the micro-computer, and then through a menu-driven program will enter the relevant information about the prescription. Once this information has been received and the microcomputer instructed to do so, the micro-computer will print out the prescription slip so that it may be signed by the physician. The micro-computer's menu will also enable a physician to reprint a prescription slip, possibly with modified drug information, and to potentially upload or download information to and from the microcomputer and the main computer. 
Additionally, it will be possible for the physician to edit the drug database and physician database so that new information about each may be changed, updated, or removed from the main system. Furthermore, the main computer and possibly the microcomputer will have a utilities function that will enable the user to configure the computer or micro-computer to the desired settings and to perform other necessary functions. A major feature of the present invention is that the micro-computer prints out the prescription slip with all the relevant information. Therefore, other than the physician's signature and patient's name, all the information is in a typed format, thus legible to anyone who must read the slip. Consequently, the problem of not being able to read the physician's handwriting is alleviated. Another feature of the present invention is the ability of the micro-computer to store information about the drugs and, in an alternative embodiment, information about patients. Having all the necessary information about the specific drugs in front of the physician, including usual dosages, strengths, and amounts, when the physician is writing the prescription, will help prevent mistakes, but will more importantly provide the physician with a plethora of information at his or her fingertips. Additionally, in an alternative embodiment, by having specific information on patients readily available, physicians will be notified of potential allergic reactions of the patient before prescribing a certain drug. Consequently, the micro-computer will check for conflicts between the patient's record and the prescribed drug and notify the physician if any are present. Still another feature of the present invention is the ability of the micro-computer to download and possibly upload information to and from a main computer.
In this fashion a single or possibly several micro-computers can be used by a group of physicians, with each physician having the ability to use any one of the micro-computers. Furthermore, when it is necessary to update the relevant drug information carried by the micro-computers, this information can easily be downloaded to each unit. Yet another feature of the present invention is the incorporation of a password or other ID number for each physician. This prevents someone other than the physician from prescribing medication with the microcomputers. Other features and advantages of the present invention will be apparent to those skilled in the art from a careful reading of the Detailed Description of a Preferred Embodiment presented below and accompanied by the drawings. In the drawings, FIG. 1 is a perspective view of a micro-computer and printer according to a preferred embodiment of the present invention; FIG. 2A is a schematic view of a flow chart for the micro-computer according to a preferred embodiment of the present invention; FIG. 2B is a schematic view of a flow chart for the main computer according to a preferred embodiment of the present invention; and FIG. 3 is a schematic view of a flow chart according to an alternative embodiment of the present invention. Referring now to FIG. 1, a micro-computer 10 and a printer 20 for the preparation of a prescription slip 30 according to the preferred embodiment of the present invention are shown. Prescription slip 30 contains information about the prescribed drug and the physician who prescribed the drug. The physician information on prescription slip 30 may include the physician's name, address, phone number, personal identification number, Drug Enforcement Agency (DEA) number, and a place for the physician to sign. The drug information 40 contained on prescription slip 30 may include the prescribed drug's name, the strength, the quantity, the dosage, the refill amount, and whether a substitute is allowed.
Additionally, prescription slip 30 will have a place for the patient's name and a place for the date or a printed date. Micro-computer 10 is preferably a hand-held computer having both read-only-memory (ROM) and random-access-memory (RAM). The memory of micro-computer 10 is capable of storing a variety of information. In the preferred embodiment, the memory stores the drug information and the physician information. In an alternative embodiment, the memory may be able to store information on a variety of patients, including total or partial patient records or information on known drug allergies or other reactions. Printer 20 is operatively connected to micro-computer 10 and will print prescription slip 30 upon demand by a user. Printer 20 can be connected to micro-computer 10 by a variety of methods known to those skilled in the art, including, but not limited to, male-female electric plugs, cables, or by radio communication. In other words, it may be possible for printer 20 to be located apart from micro-computer 10 yet still be in operative communication. Printer 20 holds a roll of paper 22 and prints information on paper 22 to form prescription slip 30. The information that is printed by printer 20 is controlled by the user through micro-computer 10. Those of ordinary skill in the art will recognize that a variety of hand-held micro-computers and printers may be used for the present system. The only requirement for micro-computer 10 is that it have the ability to store at least some drug information and be able to communicate with printer 20 so that prescription slip 30 can be printed. In the preferred embodiment, micro-computer 10 and printer 20 are used with a main computer 60 which maintains the drug and physician databases. These databases, in whole or in part, may be downloaded to micro-computer 10, so that a prescription for a specific drug may be printed from micro-computer 10 or printer 20.
In another embodiment, micro-computer 10 and printer 20 are part of a larger system as shown in FIG. 3. This larger system comprises a main computer 60 and a number of micro-computers 10 and printers 20. Computer 60, as in the preferred embodiment, is capable of storing a relatively large amount of information as compared to micro-computer 10. For example, computer 60 may contain information on several thousands of prescription drugs, every patient of a specific physician group, or any other relevant information that would be kept in a physician practice group. Furthermore, computer 60 in the preferred embodiment is a personal computer or PC that is readily available in the marketplace and known to those of ordinary skill in the art. Micro-computer 10 and computer 60 have means for transferring information between each other. This downloading or uploading process can be conducted over a physical cable connection or by remote communication such as radio waves. It should be noted that there are numerous methods and devices not listed above for uploading and downloading information between two computers. Being able to transfer information allows updated drug, physician, and/or possibly patient information to be transferred and stored within memory of micro-computer 10. It will be recognized that a variety of additional information can be uploaded or downloaded between micro-computer 10 and computer 60 without departing from the spirit and scope of the present invention. In the use of micro-computer 10 and printer 20, it is contemplated that a physician will carry the device by hand to the patient's examining room, if the patient is visiting the physician's office. After diagnosing the patient's illness, the physician uses micro-computer 10 and printer 20 to prepare a prescription slip 30. 
Once prescription slip 30 has been prepared, the physician tears off prescription slip 30 from printer 20 and inserts the patient's name and then signs prescription slip 30 in the appropriate place. In the preparation of prescription slip 30, as shown in FIG. 2A, the physician first enters a personal identification number (PIN), thus activating micro-computer 10 and identifying the specific physician who is using the device. Additionally, after access has been obtained into micro-computer 10, a menu-driven system allows the physician to choose from a list of options. In order to "write" a prescription, the physician selects the appropriate selection and identifies the prescription drug by drug ID number or by scrolling through a list of drug names. Once the drug name has been selected, various default drug information is available, including the nominal quantities, strengths, and dosages for the specific drug. If the physician is satisfied with the default drug information, the physician can proceed to enter additional information, including whether substitutions are allowed and the number of refills available to the patient. After this additional information is entered, the physician can print out the prescription slip 30 for his or her signature, or can return to the beginning to start the procedure over. Additionally, in the alternative embodiment, microcomputer 10 would compare the prescribed drug with the patient's records to determine if there is a conflict. In other words, if the patient has a known reaction to the specific drug, the physician will be notified by micro-computer 10. If, during the "writing" process, the physician wishes to change the default drug information, the physician will have that opportunity by selecting and entering the appropriate information through the menu-driven system on micro-computer 10. The information and selections can be entered and changed through the use of a keypad 12 and a display 14 on micro-computer 10.
After the drug information has been modified as the physician wants it, the physician will enter information about a possible substitute and refill amount. Once the drug information has been entered and the physician approves, micro-computer 10 and printer 20 will print prescription slip 30 so that it may be signed by the physician. As with any other prescription slip, once the micro-computer generated slip is produced it can be taken by the patient to the pharmacist to be dispensed. Other functions can also be performed on micro-computer 10 and printer 20 by the physician or other medical personnel. For example, as shown in FIG. 2A, the physician may reprint a prescription or modify a previous prescription and then reprint that one. Furthermore, as stated above, micro-computer 10 can receive and transmit information from and to computer 60, such as physician information and drug information. In FIGS. 2A and 2B, computer 60 is referred to as PC and micro-computer 10 is referred to as PCT. The transfer of information is especially important when there are drug information updates and when a new physician will be using micro-computer 10 and printer 20. Additionally, as shown in FIG. 2B, the users of computer 60 will be able to edit the drug database and a physician database that is stored within computer 60. It is also possible to print the drug file and physician file stored within computer 60. The physician will also be able to perform other utility functions on computer 60. For instance, the physician will be able to back up databases, restore databases, format diskettes, re-index the system, select a printer type for computer 60, select a backup drive, and select a floppy disk type. Furthermore, under this heading the physician will be able to enter a facility name corresponding to the heading for prescription slip 30, showing the name, address, and phone number of the physician's practice.
This will be downloaded to micro-computer 10 and printed on prescription slip 30. It will be recognized that computer 60 and micro-computer 10 can also be programmed to perform other relevant functions. In the other embodiment, where micro-computer 10 and printer 20 are a part of a larger group with computer 60, those of ordinary skill in the art will recognize that a large variety of information can be stored and transferred by the use of micro-computer 10 and computer 60. For instance, it may be possible that prescription information can be uploaded from micro-computer 10 to a patient's record contained on computer 60. Additionally, it may also be possible to track the quantity of a specific prescribed drug and which physician prescribed it. Those skilled in the art will recognize the additional memory requirement for micro-computer 10 that will be necessary as the quantity and difficulty of the functions performed by micro-computer 10 increase. Furthermore, because the system eliminates paper prescription pads, which can be stolen without leaving any record or method of determining the theft, unauthorized prescription slips 30 would be easy to track and detect. It will be apparent to those skilled in the art that many changes and substitutions can be made to the preferred embodiment herein described without departing from the spirit and scope of the present invention as defined by the appended claims.
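The prescription-writing flow the patent describes (PIN login, drug selection from stored defaults, optional modification of those defaults, then printing) can be sketched in a few lines. This is a toy illustration of the workflow, not the patented implementation; the data tables, function names, and slip layout are all assumptions:

```python
# Minimal sketch of the patent's menu-driven flow: PIN login, drug lookup
# by ID, optional override of the default characteristics, then "printing"
# the slip as text. All data and names here are illustrative assumptions.

PHYSICIANS = {"1234": {"name": "Dr. A. Smith", "dea": "AS1234567"}}
DRUGS = {
    "101": {"name": "Amoxicillin", "strength": "500 mg",
            "quantity": 30, "dosage": "1 capsule three times daily"},
}

def write_prescription(pin, drug_id, overrides=None, refills=0,
                       substitution_allowed=True):
    physician = PHYSICIANS.get(pin)
    if physician is None:
        raise PermissionError("unknown physician PIN")
    drug = dict(DRUGS[drug_id])      # start from the default characteristics
    drug.update(overrides or {})     # the physician may modify the defaults
    lines = [
        f"{physician['name']}  DEA: {physician['dea']}",
        f"Rx: {drug['name']} {drug['strength']}",
        f"Quantity: {drug['quantity']}  Dosage: {drug['dosage']}",
        f"Refills: {refills}  Substitution: "
        f"{'permitted' if substitution_allowed else 'dispense as written'}",
        "Patient: ______________  Signature: ______________",
    ]
    return "\n".join(lines)

print(write_prescription("1234", "101", overrides={"quantity": 20}, refills=1))
```

As in the patent, everything on the slip is typed except the patient's name and the signature, which remain blank lines to be filled in by hand.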
Can you memorize a phone number for long enough to write it down? How about two phone numbers at the same time? Digit Span tests your ability to remember a sequence of numbers that appear on the screen, one at a time. When you hear a beep, click on the numbers you just saw, in order. If you correctly recall all of the numbers, then the next sequence will be one number longer. If you make a mistake, then the next sequence will be one number shorter. After three mistakes, the test will end. In this test: - Accuracy does matter; after three errors, the test ends. However, wrong answers do not subtract from your score, which is the maximum number of digits you correctly remember. - Speed does not matter. You have as long as you want to answer—but it might be hard to remember if you wait too long! So to get maximum points, pay careful attention and reach the longest string of digits you can possibly remember. - Your digit span can be increased with the right strategies. Experiment with your mental approach to the test to find strategies that work for you. - For most people, "chunking" is an effective strategy—instead of thinking about each digit separately, think of groups of digits that form a smaller number of meaningful units (chunks). - For example, instead of thinking about 1 4 2 8 5 7 as six digits, thinking of it as three numbers—14, 28, and 57—could make it easier to recall. It's not easy, and requires a lot of practice to master. Your score on this test contributes to: - Your verbal ability score (a lot). - Your short-term memory score (a bit). That's right, perhaps surprisingly, it's more closely related to verbal ability than to memory. The contribution of each test to each performance category is based on a "factor analysis" that looked at how tests tend to clump together when measuring a massive set of data. The results were published in Neuron in 2012 (Hampshire, Highfield, Parkin, & Owen, 2012). 
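The adaptive rule described above (one digit longer after a correct recall, one shorter after a mistake, ending after three mistakes, scored as the longest sequence correctly recalled) can be sketched as a short simulation. The function and the simulated participant below are illustrative assumptions, not the actual test code:

```python
# Sketch of the adaptive staircase described above: the sequence grows by
# one digit after a correct recall and shrinks by one after a mistake; the
# test ends after three mistakes. The score is the longest sequence that
# was correctly recalled. `recall` stands in for the participant's answer.

import random

def run_digit_span(recall, start_length=3, max_errors=3):
    length, errors, best = start_length, 0, 0
    while errors < max_errors:
        sequence = [random.randint(0, 9) for _ in range(length)]
        if recall(sequence) == sequence:   # all digits, in order
            best = max(best, length)
            length += 1
        else:
            errors += 1
            length = max(1, length - 1)
    return best

# A simulated participant who recalls perfectly up to 6 digits:
span_of_six = lambda seq: seq if len(seq) <= 6 else []
print(run_digit_span(span_of_six))  # 6
```

Chunking, as in the 14-28-57 example above, effectively raises the length at which recall stays accurate, which is why it lifts the final score.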
The exact contribution of each test to each performance category may change as more data is collected. The Science Behind Digit Span The science behind digit span reveals why it's associated more with verbal ability than short-term memory alone. Scientists refer to short-term memory, or working memory, as the cognitive system that allows the temporary storage and manipulation of information. According to one influential cognitive theory, this system has specialised components, one of which, the "phonological loop," underlies verbal working memory abilities (Baddeley & Hitch, 1974). The phonological loop comprises a verbal storage system and a rehearsal system. As you do this test, you may find yourself mentally rehearsing the string of digits as they appeared on screen; this is the rehearsal system in action. It allows the visual inputs to be recoded so that they can enter your short-term verbal store, and it also refreshes decaying representations—without refreshing digits verbally, they would soon be forgotten. We have been studying how the brain remembers verbal information for nearly ten years. Our research has revealed that, while you are performing the digit span task, areas of your frontal cortex become activated. In one study (Owen et al., 2000), participants either had to recall digits in the order presented (forward recall), or in reverse order (backward recall), with backward being a much more demanding task. We found that both tasks engaged the mid-ventrolateral frontal cortex, but only when participants were recalling in reverse order did the mid-dorsolateral frontal cortex become activated. Both of these tasks required verbal working memory, yet different activation patterns were observed in the brain. On this basis, we concluded that frontal-lobe activity in this task relates to the type of memory process being performed (i.e., storage, reordering) and is not specific to the type of information that is being remembered (i.e., verbal memory).
An average adult is thought to have a digit span of 7 items (plus or minus 2; Miller, 1956). As mentioned above, one of the best studied methods for improving verbal memory is through the use of "chunking" strategies, in which items are recoded into meaningful units or "chunks." In one study, by training a volunteer to use complex chunking strategies over the course of 20 months, scientists were able to increase digit span from 7 to a massive 79 items (Ericsson et al., 1980)! Our colleagues have studied the underlying brain activity involved in chunking. When recoding strategies were used to remember digit sequences, increased activation was observed in the lateral prefrontal and posterior parietal cortex. On this basis, we have hypothesised that this prefrontal-parietal network underlies strategic recoding in working memory (Bor et al., 2004, 2006). Digit Span in the Real World Verbal working memory is involved in many everyday tasks, from remembering a telephone number while you enter it into your phone, to understanding long and difficult sentences. Think about it; how could you understand a whole sentence if you couldn't remember the words at the beginning long enough to connect with the words at the end! Verbal working memory is also thought to be one of the elements underlying intelligence, so the digit span task is a common component of many IQ tests, including the widely used WAIS (Wechsler Adult Intelligence Scale). Performance on the digit span task is also closely linked to language learning abilities; improving your verbal memory capacity may therefore help you to master a new language or to expand your vocabulary. Some people have even made a sport out of increasing their digit span. Every year, the World Memory Championship tests how many digits can be remembered, in various types of games. In the 2015 competition, when given an hour to memorize digits, the current world record holder, Alex Mullen, recalled 3029 digits!
Reaching 3029 digits is probably not feasible for the average person, but some strategies (like chunking, described above), alongside lifestyle optimization, can increase your digit span. Stress and exercise may not have an effect right away; the results of studies looking at their immediate impact on Digit Span are inconsistent. Getting the right amount of sleep, however, may boost your scores right away. In one study (Sadeh, Gruber, & Raviv, 2003), children asked to increase their sleep by just one hour significantly increased their Digit Span performance. Try varying your own sleep schedule to see how Digit Span changes.
- Baddeley, A. D., & Hitch, G. (1974). Working memory. In G. H. Bower (Ed.), Recent advances in learning and motivation, Vol. 8. New York: Academic Press.
- Bor, D., Cumming, N., Scott, C. E. M., & Owen, A. M. (2004). Prefrontal cortical involvement in verbal encoding strategies. European Journal of Neuroscience, 19(12), 3365-3370.
- Bor, D., & Owen, A. M. (2007). A common prefrontal-parietal network for mnemonic and mathematical recoding strategies within working memory. Cerebral Cortex, 17, 778-786.
- Ericsson, K. A., Chase, W. G., & Faloon, S. (1980). Acquisition of a memory skill. Science, 208, 1181-1182.
- Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81-97.
- Owen, A. M., Lee, A. C. H., & Williams, E. J. (2000). Dissociating aspects of verbal working memory within the human frontal lobe: Further evidence for a 'process-specific' model of lateral frontal organization. Psychobiology, 28(2), 146-155.
- Sadeh, A., Gruber, R., & Raviv, A. (2003). Effects of sleep restriction and extension on school-age children: What a difference an hour makes. Child Development, 74(2), 444-455.
This informal study day will shine a light on Marie Neurath and her contemporaries: pioneering women who used design to revolutionise education. Leading experts in the field will consider the ways in which Neurath, Marion Richardson, Gwen White and Barbara Jones utilised illustration in the classroom to promote a more egalitarian and creative educational agenda.

- Barbara Jones' This and That in the context of the DIA and post-war design agendas – Joe Pearson of Design for Today
- Marie Neurath: designing for children – Professor Sue Walker of the University of Reading
- A History of Everyday Things in England: illustrated histories for children – Desdemona McCannon of Women in Print
- Dryad handicrafts: the leaflets that made making easy – Jane Audas
- Marion Richardson: writing as pattern – Dr Bryony Quinn
- Gwen White: A perspective on pattern – Kate Farley

Important to know: Please bring warm layers as the South Gallery is kept at a cool temperature to conserve the artwork on display.
Genomic imprinting is an epigenetic process that results in the preferential silencing of one of the two parental copies of a gene. Although the precise mechanisms by which genomic imprinting occurs are unknown, the tendency of imprinted genes to exist in chromosomal clusters suggests long-range regulation through shared regulatory elements. We characterize an 800-kb region on the distal end of mouse chromosome 7 that contains a cluster of four maternally expressed genes, H19, Mash2, Kvlqt1, and p57Kip2, as well as two paternally expressed genes, Igf2 and Ins2, and assess the expression and imprinting of Mash2, Kvlqt1, and p57Kip2 during development in embryonic and extraembryonic tissues. Unlike Igf2 and Ins2, which depend on H19 for their imprinting, Mash2, p57Kip2, and Kvlqt1 are unaffected by a deletion of the H19 gene region, suggesting that these more telomeric genes are not regulated by the mechanism that controls H19, Igf2, and Ins2. Mutations in human p57Kip2 have been implicated in Beckwith-Wiedemann syndrome, a disease that has also been associated with loss of imprinting of IGF2. We find, however, that a deletion of the gene has no effect on imprinting within the cluster. Surprisingly, the three maternally expressed genes are regulated very differently by DNA methylation; p57Kip2 is activated, Kvlqt1 is silenced, and Mash2 is unaffected in mice lacking DNA methyltransferase. We conclude that H19 is not a global regulator of imprinting on distal chromosome 7 and that the telomeric genes are imprinted by a separate mechanism(s). In mammals, a subset of genes are preferentially expressed according to their parent of origin. This phenomenon, variously termed genomic, parental, or gametic imprinting, has been shown for approximately 20 autosomal genes in mice and humans (4).
A fundamental question about imprinting involves the mechanism used for distinguishing the maternal and paternal alleles of a gene. The leading candidate is DNA methylation that is established in different patterns in the male and female germ lines and is maintained throughout embryogenesis to regulate the imprinted state. There are other epigenetic differences between the parental alleles of imprinted genes, including differential sensitivity of chromatin to nuclease digestion, asynchronous replication, and differential frequencies of meiotic recombination (5, 13, 18, 24, 25, 39), but these are thought to be the consequences of the primary epigenetic mark, not the causes. A striking feature of imprinted genes is the frequency with which they are found in close proximity to another imprinted gene, often one that is imprinted in the opposite direction. Four clusters have been characterized, and each contains both maternally and paternally expressed genes (22, 23, 29, 37, 50, 56, 58, 61). The importance of clustering in imprinting remains unclear, but it suggests a role for a cis-regulatory element(s) that acts over a distance to permit the proper imprinting of genes in the cluster. In the case of Prader-Willi and Angelman syndromes, two human diseases that are associated with a cluster of imprinted genes on chromosome 15, deletions of a small region that spans the promoter of one of the paternally expressed genes, SNRPN, result in a disruption of the imprinting of genes hundreds of kilobases away. These observations imply the existence of an “imprint control element” acting on the entire cluster (8, 10, 49). Alternatively, clustering could arise if genes within an imprinted cluster interact functionally; for example, one gene could act in cis to silence a neighboring gene in much the same way that the Xist RNA is thought to be required for silencing the genes on the inactive X chromosome (36, 40). 
Finally, the integrity of imprinted clusters may also prove to be important for their regulation. In the case of another human disease associated with an imprinted gene cluster, Beckwith-Wiedemann syndrome (BWS), chromosomal rearrangements and translocations on chromosome 11 appear to be causative factors of the disease, in part by disrupting the imprinting of the insulin-like growth factor II (IGF2) gene (7, 20, 55). In mice, evidence for the importance of imprinted gene clustering comes from studies of H19 and Igf2. These genes lie on distal chromosome 7, in a region syntenic to human chromosome 11p15.5, and the maternal silencing of Igf2 requires the presence of the H19 gene 90 kb away (31, 44). The role of H19 in the silencing of Igf2 is thought to arise from its ability to compete with Igf2 for a common set of endoderm-specific enhancers located downstream of the H19 gene (31). On the maternal chromosome, H19, because of its position relative to the enhancers, prevents enhancer activation of Igf2. On the paternal chromosome, however, allele-specific methylation suppresses the H19 promoter, allowing activation of Igf2 transcription (5, 13, 32). A similar explanation involving promoter competition has now been offered for the imprinting of the Igf2r gene on mouse chromosome 17 (3, 58). In the last 3 years, several new imprinted genes have been mapped close to Igf2 and H19, including the placenta-specific gene Mash2 and the cyclin-dependent kinase inhibitor gene p57Kip2, both of which are maternally expressed (16, 19). A targeted disruption of Mash2 leads to embryonic death from placental failure in homozygous mutants and in heterozygous mutants inheriting the null allele maternally (17). Disruption of p57Kip2 leads to embryonic or early neonatal death when inherited in the same manner (60, 62). Interestingly, p57Kip2 mutant mice exhibit some features of BWS, including macroglossia and omphalocele. 
Recently, another maternally expressed imprinted gene, KvLQT1, has been identified on human chromosome 11p15.5. KvLQT1 is imprinted in most human fetal tissues in which it is expressed, except for the heart (28). Mutations in KvLQT1 cause long-QT syndrome, a heart defect that often leads to sudden death (53). Consistent with the lack of KvLQT1 imprinting in the heart, this syndrome is not inherited in a parent-of-origin-specific manner. This well-characterized cluster of imprinted genes provides an ideal opportunity to test experimentally the significance of linkage of imprinted genes. Toward that goal, we have generated a genetic and physical map of the region in the mouse, on which we have accurately placed eight genes. We show that a mutation at the H19 locus that disrupts imprinting of Igf2 and Ins2 has no effect on the imprinting of Mash2, Kvlqt1, and p57Kip2. Likewise, deletion of p57Kip2 does not affect the imprinting or expression of the other genes. In contrast, a mutation in the maintenance DNA methyltransferase gene (Dnmt) has different effects on imprinting depending on the gene in question. A total of 78 progeny of an interspecific (BTBR × M. spretus)F1 × BTBR backcross between Mus spretus and Mus domesticus were scored by PCR for the MIT markers D7Mit12 and D7Mit47 (Research Genetics). Restriction fragment length polymorphisms between the parental strains were detected with the 1.8-kb Mash2 cDNA fragment, a 2-kb SpeI p57Kip2 fragment, and a 4-kb EcoRI-SalI H19 fragment. Genomic DNA from the N2 progeny was digested with XbaI (Mash2 and p57Kip2) or SphI (H19), separated on a 1% agarose gel, transferred to a nitrocellulose membrane (Millipore), and hybridized to the appropriate radiolabeled fragment. The membranes were washed and visualized by autoradiography. 
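The RFLP scoring described above rests on simple fragment-size arithmetic: a restriction site present in one species' allele but absent from the other yields different fragment lengths when the probe region is digested, so each backcross animal's genotype can be read off the Southern blot. A minimal sketch of that logic, using hypothetical allele sequences and XbaI's recognition site (TCTAGA) — the sequences and fragment sizes here are illustrative, not the actual loci:

```python
import re

def digest(seq, site):
    """Return fragment lengths from cutting seq at every occurrence of site.

    Simplified model: cuts at the start of each recognition site,
    ignoring the enzyme's actual cleavage offset within the site.
    """
    cut_positions = [m.start() for m in re.finditer(site, seq)]
    bounds = [0] + cut_positions + [len(seq)]
    return [b - a for a, b in zip(bounds, bounds[1:]) if b > a]

# Hypothetical alleles: the M. spretus allele carries an extra XbaI
# site that the M. domesticus allele lacks, so the same probe region
# digests into a different fragment pattern for each parent.
domesticus = "A" * 50 + "TCTAGA" + "A" * 100
spretus    = "A" * 50 + "TCTAGA" + "A" * 40 + "TCTAGA" + "A" * 54

print(digest(domesticus, "TCTAGA"))  # one cut  → two fragments
print(digest(spretus, "TCTAGA"))     # two cuts → three fragments
```

On a blot, a backcross animal showing both patterns is heterozygous, while one showing only the BTBR pattern carries two M. domesticus-derived alleles at that locus; comparing such calls across loci identifies the recombinants used to order the genes.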
The Princeton and MIT YAC libraries were screened by PCR with Mash2-specific primers 5′-CTC TAC GTC TCC GTC CCG-3′ (forward) and 5′-CCA CCA CGT GTC TCC CTT AC-3′ (reverse) and p57Kip2-specific primers 5′-GCC GGG TGA TGA GCT GGG AA-3′ (forward) and 5′-AGA GAG GCT GGT CCT TCA GC-3′ (reverse). The BAC library (Research Genetics) was screened by hybridization of radiolabeled probes to library filters and visualization by autoradiography. The probes were inverse PCR products for the left arms of the YACs FDK.D2, FDI.G4, and D43.H9, a 600-bp BamHI-NcoI fragment located 3′ of Mash2, the 1.8-kb p57Kip2 cDNA, the 2-kb Kvlqt1 cDNA, the 1.8-kb Tyrosine hydroxylase (Th) cDNA, and a 3-kb EcoRI-SalI fragment containing H19. To construct a physical map, the BAC DNAs were digested with rare-cutting restriction enzymes, separated on pulsed-field gels, transferred to nylon membranes (Hybond), and probed with the same probes used to screen the BAC library. BTBR mice were obtained from William Dove, and C57BL/6J and 129/Sv mice were purchased from the Jackson Laboratory. The BTBR(SPR H19-p57) congenic strain containing distal chromosome 7 sequences from M. spretus and a similar strain which was a hybrid of C57BL/6J and Mus castaneus, B6(CAST H19-p57), were created by continuous backcrossing of F1 hybrids to BTBR and C57BL/6J and selection, respectively, for M. spretus and M. castaneus alleles of H19 and p57Kip2. The Dnmt mutant mice harboring the s allele were obtained from R. Jaenisch (33), and the p57Kip2 mutant mice were obtained from S. Elledge (62). Dnmt genotyping was accomplished by a PCR that detected both the wild-type and targeted loci with primers 5′-CCT TCA GTG TGT ACT GCA GTC G-3′ (forward), 5′-AAT GAG ACC GGT GTC GAC AG-3′ (reverse), and 5′-CTT GTG TAG CGC CAA GTG C-3′ (reverse). 
A 20-μl reaction volume containing 100 ng of genomic DNA was prepared, and the conditions for amplification were 90°C for 30 s, 53°C for 30 s, and 72°C for 30 s for 35 cycles followed by 4 min at 72°C for 1 cycle. H19Δ13 genotyping used primers 5′-CAG TGT GGG AAA CAG CCT CG-3′ (forward) and 5′-CTT GTG TAG CGC CAA GTG C-3′ (reverse, same as the Dnmt genotyping primer) under the same conditions. p57Kip2 genotyping was performed as described by Zhang et al. (62). Total RNA was isolated from embryonic day 6.5 to 9.5 (e6.5 to e9.5) embryos and ectoplacental cones by guanidine thiocyanate extraction and from e12.5 embryos and fetal and adult organs by LiCl-urea extraction (1, 2). The RNA was treated with DNase I (Stratagene) for 30 min and then extracted with phenol-chloroform (1:1), precipitated with 2 volumes of ethanol, and reverse transcribed by use of Superscript II (Gibco/BRL) with oligo(dT) as the primer as specified by the manufacturer. Analogous reactions were performed without reverse transcriptase (RT) to control for DNA contamination. Imprinting of H19 in Dnmt−/− embryos was assayed by single-strand conformational polymorphism analysis as described previously (51). H19 and Igf2 expression in p57Kip2-deficient mice was detected by allele-specific RNase protection assays (6, 30). For Mash2, cDNA was amplified by PCR in the presence of [33P]dCTP with Mash2-specific primers spanning intron 2, 5′-TTA GGG GGC TAC TGA GCA TC-3′ (forward) and 5′-AAG TCC TGA TGC TGC AAG GT-3′ (reverse). The conditions for amplification were 94°C for 1 min, 55°C for 2 min, and 72°C for 2 min for 35 cycles followed by 4 min at 72°C for 1 cycle. The products were digested with BstNI for 1 h at 60°C and run on a 40-cm 8% acrylamide gel at 50 W for 2 h. The gel was dried and visualized by autoradiography on BioMax film (Kodak). CD81 cDNA was amplified by PCR with primers 5′-AGC CAT TGT GGT AGC TGT C-3′ (forward) and 5′-CAT TGA AGG CAT AAC AGG GCT TAC-3′ (reverse). 
The conditions for amplification were 94°C for 30 s, 55°C for 60 s, and 72°C for 90 s for 35 cycles followed by 4 min at 72°C for 1 cycle. The products were digested with RsaI for 1 h at 37°C and analyzed on a 10% polyacrylamide gel. Kvlqt1 cDNA was amplified by PCR with primers 5′-GAT CAC CAC CCT GTA CAT TGG-3′ (forward) and 5′-CCA GGA CTC ATC CCA TTA TCC-3′ (reverse). On the basis of the structure of the human gene (28), these primers amplify sequences that span four introns. The conditions for amplification were 94°C for 30 s, 55°C for 60 s, and 72°C for 90 s for 35 cycles followed by 4 min at 72°C for 1 cycle. The product was digested with PvuII for 1 h at 37°C and analyzed on a 10% 1× TBE polyacrylamide gel. p57Kip2 cDNA was amplified by PCR with primers spanning intron 2, 5′-TTC AGA TCT GAC CTC AGA CCC-3′ (forward) and 5′-AGT TCT CTT GCG CTT GGC-3′ (reverse). The conditions for amplification were 94°C for 1 min, 57°C for 2 min, and 72°C for 2 min for 35 cycles followed by 4 min at 72°C for 1 cycle. The products were digested with AvaI for 1 h at 37°C and analyzed on a 10% polyacrylamide gel. To characterize the imprinted domain at distal chromosome 7, we first constructed a genetic and physical map of the region. Previous linkage analysis had positioned p57Kip2 centromeric to H19 on mouse distal chromosome 7, analogous to the orientation of these genes on human chromosome 11p15.5 (19). To confirm this, we carried out a linkage analysis with 78 progeny of the interspecific backcross (BTBR × M. spretus)F1 × BTBR. In contrast to the previous report, our mapping places H19 at the centromeric end and p57Kip2 at the telomeric end of the cluster (Fig. 1A, right). Our gene order is based on results with three recombinant animals, whereas the other study found only one such animal. The recombination frequencies (expressed as mean genetic distance in centimorgans [cM] ± standard error) are D7Mit12-2.6 ± 1.8-H19-2.6 ± 1.8-Mash2-1.3 ± 1.3-p57Kip2-0-D7Mit47.
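The quoted genetic distances follow directly from the recombinant counts: the map distance in cM is 100 times the recombination fraction, with the usual binomial standard error. A minimal sketch — the counts of 2 and 1 recombinants per interval are assumptions inferred here because they reproduce the reported values, not figures stated directly in the text:

```python
from math import sqrt

def map_distance(recombinants, progeny):
    """Genetic distance in centimorgans with its standard error.

    cM = 100 * r/n; SE = 100 * sqrt(p*(1-p)/n), the binomial
    standard error on the recombination fraction p = r/n.
    """
    p = recombinants / progeny
    return 100 * p, 100 * sqrt(p * (1 - p) / progeny)

# Assumed counts that reproduce the reported intervals among 78 progeny:
for r in (2, 1):
    cm, se = map_distance(r, 78)
    print(f"{r} recombinant(s) / 78: {cm:.1f} +/- {se:.1f} cM")
    # 2/78 gives 2.6 +/- 1.8 cM; 1/78 gives 1.3 +/- 1.3 cM
```

This also makes concrete why the intervals flanked by a single recombinant carry a standard error as large as the estimate itself: with 78 meioses, one crossover more or less doubles or erases the distance.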
To determine the physical distances between the genes in the cluster, we used primers specific to p57Kip2 and Mash2 to screen the Princeton and MIT YAC libraries. Using both YAC ends and probes specific to p57Kip2, Mash2, and Kvlqt1, we also screened the Research Genetics BAC library. Restriction enzyme digest analysis of the resulting YAC and BAC clones placed H19 and p57Kip2 about 800 kb apart (Fig. 1B). Three additional genes were localized to the region: Kvlqt1, CD81 (Tapa1), and Th. In addition, we determined the transcriptional orientation of each gene by restriction mapping with 5′ and 3′ gene-specific probes and found that all are transcribed toward the centromere, with the exception of Kvlqt1 and possibly CD81, whose current orientation is based on that of the human gene (Fig. 1). The physical distances between H19 and Mash2 (~250 kb) and Mash2 and p57Kip2 (~550 kb) are considerably shorter than those predicted from the genetic distances (~5 and ~2.5 Mb, respectively). The presence of a high rate of recombination during female meiosis is unexpected, given studies with humans that suggested that female meiotic recombination is suppressed in imprinted regions (39). Whether this reflects a species difference or whether it is specific to the interspecific cross we analyzed is unknown. To determine the imprinting profile of p57Kip2, Kvlqt1, CD81, and Mash2 during development, we generated progeny from reciprocal crosses between strains of M. domesticus and BTBR(SPR H19-p57), a congenic BTBR strain containing sequence from M. spretus at distal chromosome 7. We identified polymorphisms between alleles from the two species and developed RT-PCR assays to determine which parental allele was expressed in the offspring. For each assay, mixing controls were used to verify that there was no allelic bias in amplification (data not shown).
Furthermore, all assays were done with primers that spanned at least one intron, to eliminate the possibility of amplification of genomic DNA. As shown in Fig. 2A, Kvlqt1 is maternally expressed in extraembryonic tissues at all stages of development analyzed but begins to lose its imprint in embryos after e9.5. This finding is in contrast to studies of human KvLQT1, which showed imprinting in all fetal tissues tested except for the heart (28). Interestingly, there appears to be a 1-day difference in the acquisition of paternal Kvlqt1 expression in 129/Sv M. domesticus × BTBR(SPR H19-p57) and BTBR(SPR H19-p57) × 129/Sv M. domesticus hybrids, suggesting that the M. spretus allele is more readily activated by e9.5. To determine whether the expression pattern of Kvlqt1 in mouse embryos past e8.5 was skewed by biallelic expression in the heart, we compared the expression of Kvlqt1 in the heads with that in the bodies of e13.5 embryos derived from crosses between C57BL/6 mice and a B6(CAST H19-p57) congenic strain (Fig. 2B and data not shown). Fortuitously, the M. castaneus Kvlqt1 allele possesses the same polymorphism as the M. spretus allele. Our results confirmed that Kvlqt1 is biallelically expressed in both embryo heads and bodies at e13.5, suggesting that the biallelic expression we detected in whole embryos was not solely attributable to contamination from heart RNA. To rule out tissue-specific imprinting that would be obscured by examination of whole embryos, we examined RNAs isolated from tissues of 4-day-old neonates derived from C57BL/6 × B6(CAST H19-p57) reciprocal crosses. As shown in Fig. 2B, Kvlqt1 was biallelically expressed in all neonatal tissues examined. Thus, we conclude that mouse Kvlqt1 imprinting is specific to extraembryonic tissues except at early stages of development.
Consistent with previous reports of p57Kip2 imprinting (19), we observed that p57Kip2 was imprinted at all developmental stages in both placental and embryonic tissues (Fig. 2A). By in situ hybridization, Guillemot et al. (16) had observed the expression of paternal Mash2 mRNA in a fraction of trophoblast cells at e6.5 and e7.5, suggesting that the imprinting of Mash2 is temporally regulated. To confirm this finding in wild-type animals, where there would be no selective pressure for inappropriate Mash2 expression, we used allele-specific RT-PCR; however, we were unable to detect Mash2 mRNA at e6.5 (Fig. 2C, lane 3). By e7.5, the paternal allele of Mash2 was silent when it was inherited from BTBR(SPR H19-p57) (lane 4). When paternal Mash2 was inherited from M. domesticus, however, its expression was still evident at e8.5 (lane 8) and was not extinguished until e9.5 (lanes 9 and 10). Thus, as suggested from the Mash2 gene disruption, Mash2 imprinting is developmentally acquired, at least when the paternal allele is derived from M. domesticus. Furthermore, the more rapid silencing of the M. spretus allele of Mash2 is an example of an allelic difference in the timing and expression of imprinting in mice. Prior studies of CD81 (35) and Th (63) gene disruptions in mice did not indicate a parent-of-origin phenotype typical of imprinted genes. Th homozygous mutant embryos die of cardiovascular failure between e11.5 and e15.5, whereas heterozygotes are fully viable. The CD81 mutant phenotype is a subtle delay in the humoral response of B cells, and hence its imprinting might have been overlooked. Therefore, we used RT-PCR to examine its imprinting during development. Although CD81 expression shows a strong maternal bias early in development, by e8.5, it is expressed well from both parental alleles in embryonic and extraembryonic tissues (Fig. 2A). Thus, Mash2 is closely flanked by two predominantly nonimprinted genes.
This observation is reminiscent of X chromosome inactivation in humans, where genes that escape inactivation are interspersed among genes that are inactivated (57). To date, L23MRP and NAP2, two nonimprinted genes that lie immediately telomeric and centromeric, respectively, to this region of human chromosome 11p15.5, have been viewed as defining the limits of the imprinting cluster (21, 52). The discovery of other nonimprinted genes embedded within the cluster, however, suggests that this notion should be reconsidered. Because Igf2 and Ins2 are known to depend on the H19 gene for their imprinting, we were interested in whether H19 could exert its effect further along distal chromosome 7. The distal genes are expressed on the same chromosome as H19, and therefore we would not expect a role for H19 in promoter competition with Mash2, Kvlqt1, and p57Kip2. However, there is a precedent in the Prader-Willi imprinted gene cluster for a deletion of SNRPN, a paternally expressed gene, affecting the expression of linked paternal genes many kilobases away (45). To address this question, we assayed the imprinting status of Mash2, Kvlqt1, and p57Kip2 in mice lacking the H19 gene plus 10 kb of its 5′-flanking DNA (H19Δ13) (30). As shown in Fig. 3, all three genes showed a normal, imprinted expression profile in H19Δ13 heterozygous mice irrespective of whether the mutation was inherited maternally or paternally, suggesting that Mash2, Kvlqt1, and p57Kip2 are not regulated by H19. In humans, mutations in p57Kip2 have been associated with a minority of patients with BWS, and mice homozygous for an inherited p57Kip2 null allele have been put forward as a potential mouse model for the disease (27, 60, 62).
Because the somatic overgrowth associated with BWS is often attributed to overexpression of Igf2 and because we have observed somatic overgrowth in e16.5 maternal heterozygous embryos (9a), we asked whether loss of p57Kip2 affected the expression or imprinting of Igf2 and/or H19. p57Kip2 heterozygous null mice (lacking exons 1 and 2 [87% of the coding region]), obtained from S. Elledge (Baylor College of Medicine), were crossed to B6(CAST H19-p57) mice to obtain e13.5 embryos inheriting the p57Kip2 null allele from either parent. With RNA derived from these embryos, we carried out allele-specific RNase protection assays to assess the effects on H19 and Igf2 RNAs, as well as RT-PCR analysis to examine the effects on imprinting and expression of Kvlqt1. As shown in Fig. 4, H19, Igf2, and Kvlqt1 imprinting and expression are not affected by the p57Kip2 deletion when it is present on the maternal chromosome, the chromosome from which p57Kip2 is normally expressed. Furthermore, the deletion of p57Kip2 DNA encompassing exons 1 and 2 did not disrupt local imprinting, since transcripts of the neomycin resistance (Neor) gene that replaces p57Kip2 were detected only upon maternal inheritance (9a). Given these results, we conclude that loss of p57Kip2 function has no effect on imprinting and expression of other genes in the region. DNA methylation has different effects on imprinted genes. For H19 and Snrpn, methylation is required to maintain the silence of the genes, a finding that is consistent with the substantial methylation at their promoters (5, 13, 32, 46). In contrast, Igf2 and Igf2r, both of which are methylated on their expressed allele, are silenced in the absence of DNA methylation (32).
It has been suggested that this classification of imprinted genes on the basis of the response to the loss of methylation is a useful way to distinguish genes that are the direct targets for DNA methylation (i.e., H19 and Snrpn) from those that are responding to methylation changes elsewhere (i.e., Igf2 and Igf2r) (3). To classify the telomeric genes on distal chromosome 7 with regard to their response to methylation, we analyzed their imprinting in mice lacking the DNA methyltransferase gene (Dnmt), whose product is responsible for the maintenance of methylation in the genome (33). To allow us to distinguish the parental alleles of the genes in question, we bred the Dnmts null allele onto a BTBR(SPR H19-p57) background. Dnmt−/− mice die just after e9.5. Therefore, we studied embryonic and extraembryonic tissues from pools of e9.5 progeny of reciprocal crosses between heterozygous Dnmt+/− and Dnmt+/− BTBR(SPR H19-p57) mice. As shown in Fig. 5, in the absence of maintenance methylation, p57Kip2 is biallelically expressed (lanes 2, 4, 6, and 8), suggesting that DNA methylation is acting directly on p57Kip2 to repress its expression. In contrast, the maternal allele of Kvlqt1 is repressed in the absence of methylation (lanes 10, 12, 14, and 16), suggesting that this gene is an indirect target of DNA methylation and that methylation is required for Kvlqt1 expression. The most surprising observation of the methylation study was made when the imprinting of Mash2 was examined and found to be unaffected by the loss of DNA methylation. This result is most striking in [129/Sv × BTBR(SPR H19-p57)]F1 hybrids (Fig. 5; compare lanes 17 and 18). In [BTBR(SPR H19-p57) × 129/Sv]F1 wild-type hybrids, the expression of the gene exhibits a strong maternal bias by e9.5 whereas the Dnmt−/− embryos are biallelic (compare lanes 19 and 20).
This difference in the two F1 hybrids most probably reflects the fact that the DNA methyltransferase mutants are developmentally delayed about 1 day at e9.5 (33). Thus, when the expression of Mash2 at e8.5 is used as the appropriate comparison (Fig. 2C, lane 8), once again there is no impact of the Dnmt mutation. Given this finding, we wanted to confirm that DNA methylation had been affected in the Dnmt−/− embryos. Therefore, we used the same samples to examine the imprinting status of the H19 gene, which had previously been shown to become biallelic in the absence of Dnmt (32). As Fig. 5 illustrates, H19 RNA was detected from both alleles (lanes 22, 24, 26, and 28), confirming that Dnmt-dependent methylation is reduced in these tissues. Thus, this experiment provides no evidence for methylation playing a role in regulating the imprinting of Mash2. Although the precise mechanisms by which imprinting occurs are unknown, the conserved localization of the imprinted genes on distal chromosome 7 in mice and humans suggests that clustering may be important for mechanistic or functional reasons. Our results show that the linkage of eight genes is conserved between mice and humans, consistent with the integrity of the region being important for proper imprinting of the genes contained therein (28, 41, 42). The synteny among imprinted genes in this region probably extends beyond the region. Recently, another maternally expressed imprinted gene, IPL/Ipl, has been characterized in humans and mice (41). In humans, this gene has been physically mapped centromeric to p57Kip2, and in mice, its genetic linkage places it in an analogous position. We have identified one major difference between the organization of this region in humans and mice in the positions of Th and CD81 relative to Mash2 and Ins2. In humans, TH is within 12 kb of INS (34), whereas in mice, the gene is just 25 kb centromeric of Mash2.
In addition, a human P1 clone of the syntenic region of chromosome 11p15.5 (GenBank accession no. AC002536) places CD81 106 kb away from HASH2 (the human homolog of Mash2), whereas we detected CD81 sequences within 24 kb of Mash2. Another difference is the orientation of the cluster relative to the centromere. In humans, H19 is the most telomeric gene at 11p15.5, whereas our genetic analysis in mice places p57Kip2 closest to the telomere. For the most part, the imprinting of the genes in this cluster is conserved between humans and mice. One difference we uncovered is in the maintenance of imprinting of Kvlqt1 during embryogenesis. In humans, the gene is imprinted in all fetal tissues except the heart (28), whereas in mice, the imprint is lost in all neonatal tissues examined. Species-specific differences in imprinting have been detected for the Igf2r gene as well, but in that case imprinting is relaxed in humans (47, 59). Gene linkage has clearly been shown to be important for the imprinting of Igf2, H19, and Ins2. The mechanism is probably a transcriptional one, in which the genes require a common set of enhancers (31). DNA methylation on the paternal chromosome, the only epigenetic mark that has been identified, silences the H19 gene and thereby permits Igf2 and Ins2 expression (5, 13, 32). On the maternal chromosome, it is the position of the H19 gene, relative to the enhancers, that determines the preference for H19 transcription (54). This mechanism, however, does not extend to the telomeric genes in the cluster, since mutations that affect Igf2, H19 and Ins2 have no effect on these genes. Therefore, if a single element regulates distal chromosome 7 imprinting, that element does not appear to be the H19 gene. The most compelling evidence in favor of a mechanistic link between the imprinting of genes throughout this cluster comes from observations in human patients with BWS. 
Approximately 80% of BWS patients exhibit biallelic IGF2 expression, and overexpression of IGF2 is thought to be responsible for most of the BWS phenotype, particularly the somatic overgrowth (43). Two recent mouse models of BWS, in which overexpression of Igf2 is achieved through transgenesis or genetic manipulation, lend strong support to this conclusion (12, 48). Some BWS patients have chromosomal abnormalities including balanced translocations whose breakpoints map to two regions of chromosome 11p15.5 (20). The first cluster of breakpoints lies in the 3′ end of the KvLQT1 gene, and one patient with such a translocation was shown to exhibit biallelic IGF2 expression (7). If this finding holds up with other BWS translocation patients, it strongly suggests that IGF2 imprinting requires linkage not just to H19 but also to sequences downstream of KvLQT1. The other cluster of translocation breakpoints is at least 4 Mb centromeric to p57Kip2, but the allelic expression of IGF2 has not been examined in any of these patients. One reason for caution in interpreting the human translocations as implying a mechanistic linkage between the two domains of the cluster is that a small percentage of BWS patients have point mutations in the p57Kip2 gene itself (27, 38). It is unknown whether these rare patients display biallelic IGF2. If they do not, it is possible that the translocations are disrupting only p57Kip2 expression. As we have shown in this report, a loss-of-function mutation of p57Kip2 in mice does not result in biallelic Igf2 expression. The mice do exhibit some BWS-like symptoms, such as omphalocele, renal dysplasia, and adrenal cytomegaly, but they lack other features (60, 62). Thus, BWS is very likely to be a genetically complex disorder. Finally, there is indirect evidence for linkage between the genes in the cluster from studies of patients with Wilms’ tumor, where a general correlation between the expression of H19 and p57Kip2 has been observed (9). 
Since H19 does not appear to be the global regulator of imprinting of the telomeric genes, we considered the possibility that these genes are regulated by a common mechanism involving DNA methylation. By analogy to the paternally expressed genes in the Prader-Willi complex, which are coordinately expressed on the unmethylated paternal chromosome and silenced on the methylated maternal chromosome (for reviews, see references 15 and 26), we expected Mash2, Kvlqt1, and p57Kip2 to respond in the same way to the absence of DNA methylation. Instead, each gene responded differently. The imprinting of p57Kip2 in all tissues, coupled with the activation of its paternal allele in Dnmt−/− embryos, makes it a good candidate for a direct target of DNA methylation silencing. Indeed, Hatada and Mukai (19) had identified paternally specific methylation of a single HhaI site within the p57Kip2 gene itself. That site cannot be required for p57Kip2 imprinting, however, because it is deleted in p57Kip2 mutant mice, where the Neor gene retains imprinted expression (9a). Nevertheless, by analogy to other genes like H19 and Snrpn, our findings predict that there should be an imprint control region very close to the p57Kip2 gene. They also predict that the imprinting of p57Kip2 may not require the other genes in the cluster. Kvlqt1, on the other hand, exhibits characteristics of a gene that is an indirect target of methylation. Like Igf2 and Igf2r, the expression of the active allele is extinguished in Dnmt−/− embryos. By analogy to those genes, we would expect that there is a yet-to-be-identified paternally expressed transcript in the locus that competes with Kvlqt1 for expression in the placenta. It would be that gene whose expression is directly silenced by DNA methylation. This is the first suggestion that maternally specific methylation might exist at this cluster. An indirect mechanism for Kvlqt1 imprinting is also consistent with its tissue-specific imprinting. 
Tissue-specific imprinting can best be explained by considering the case of the Ins2 gene, which is imprinted in extraembryonic tissues but not in the pancreas (14). It has been proposed that the tissue specificity is a consequence of the position of transcriptional enhancers relative to the epigenetic mark at the H19 gene (4, 54). In extraembryonic tissues, Ins2 expression requires the same 3′ distal transcriptional enhancers that govern Igf2 and H19 expression, and thus its expression depends on the transcriptional status of the H19 gene. In the pancreas, an enhancer that lies 5′ of the gene is activated, and by virtue of its position, it escapes the influence of imprinting (11). For Kvlqt1, the target of the competition would be a placenta-specific enhancer. The gene whose imprinting does not fit into one of these two categories of imprinted genes is Mash2, which is imprinted and expressed only in the placenta but appears to be unaffected by a loss in DNA methylation. It could be that Mash2 needs only a small amount of methylation to be imprinted. Li et al. (32) had noted that the Igf2r gene was more resistant than H19 to demethylation in mice carrying a hypomorphic allele of Dnmt; however, the gene was affected in animals carrying a null allele. Furthermore, even in mice with a null allele of Dnmt, such as the animals we used in this study, there is residual genomic DNA methylation at a level approximately 5 to 10% of that in wild-type embryos (33). Thus, it is formally possible that another DNA methylase provides the signal for Mash2 imprinting. No differentially methylated sites associated with Mash2 have been detected to date, however (8a). Moreover, we have observed that a 105-kb P1 clone encompassing the Mash2 locus displays biallelic expression in transgenic mice, arguing against local controls governing its imprinting (8a). 
If methylation is not involved in Mash2 imprinting, we must invoke an entirely novel imprinting control mechanism, such as heritable changes in chromatin structure. In conclusion, our results with mice did not uncover long-range effects among the genes on distal chromosome 7 by known imprinting mechanisms, as would be expected if the evolutionary conservation of the entire region were being maintained for regulatory reasons. Furthermore, a single mechanism whereby methylation spreads along the chromosome from a nucleating center can be argued against, since methylation is predicted to be on the paternal chromosome at p57Kip2, as it is for Igf2 and H19, but is expected to be on the maternal chromosome to affect Kvlqt1. The question that remains is whether there is any mechanistic link between p57Kip2, Kvlqt1, and Mash2 imprinting. Their common imprinting in the placenta is consistent with such a connection; however, the distinct ways in which they respond to the loss of DNA methylation cannot be readily reconciled. Thus, it is possible that distal chromosome 7 does not contain a single cluster of imprinted genes but, rather, contains multiple clusters, regulated by individual mechanisms.

We thank Steve Elledge and Pumin Zhang, Baylor College of Medicine, for p57Kip2 mutant mice and the sequence of the p57Kip2 locus, and we thank Rudolph Jaenisch and En Li for the Dnmt mutant mice. We also thank R. S. Ingram for DNA sequencing, B. K. Jones for developing the Dnmt genotyping assay, and members of the laboratory for critical discussion. This work was supported by a grant from the National Institute for General Medical Sciences (GM 51460). T.C. and M.A.C. contributed equally to this work.
Creativity follows mastery, so mastery of skills is the first priority for young talent. — Benjamin Bloom

The normal curve is a distribution most appropriate to chance and random activity. Education is a purposeful activity, and we seek to have students learn what we would teach. Therefore, if we are effective, the distribution of grades will be anything but a normal curve. In fact, a normal curve is evidence of our failure to teach.

After forty years of intensive research on school learning in the United States as well as abroad, my major conclusion is: What any person in the world can learn, almost all persons can learn if provided with appropriate prior and current conditions of learning.

Education must be increasingly concerned about the fullest development of all children and youth, and it will be the responsibility of the schools to seek learning conditions which will enable each individual to reach the highest level of learning possible.

...a student attains "higher order thinking" when he no longer believes in right or wrong.

A large part of what we call good teaching is a teacher's ability to obtain affective objectives by challenging the student's fixed beliefs.

...a large part of what we call teaching is that the teacher should be able to use education to reorganize a child's thoughts, attitudes, and feelings.

We need to be much clearer about what we do and do not know so that we don't continually confuse the two. If I could have one wish for education, it would be the systematic ordering of our basic knowledge in such a way that what is known and true can be acted on, while what is superstition, fad, and myth can be recognized as such and used only when there is nothing else to support us in our frustration and despair.
Since computers came into our lives, we really didn't ask questions about how they got here or the process of naming the keys and symbols. But aren't you curious about where the names and symbols came from?

The Power Button

Back in the 1940s, WWII engineers used the binary system to label individual power buttons, toggles, and rotary switches: a 1 meant "on," and a 0 meant "off." In 1973, the International Electrotechnical Commission vaguely codified a broken circle with a line inside it as "standby power state," and sticks to that story even now. The Institute of Electrical and Electronics Engineers, however, decided that was too vague, and altered the definition to simply mean power.

The "At" Symbol

It has been known by many names: the snail (France and Italy), the little mouse (China), the monkey's tail (Germany). In 1971, Bolt, Beranek and Newman programmer Raymond Tomlinson decided to insert the symbol between computer network addresses to separate the user from the terminal. Prior to Tomlinson's use, the @ also graced the keyboard of the American Underwood in 1885 as an accounting shorthand symbol meaning "at the rate of." Some also suggest that @ has its origins in the sixth century, when monks adopted it as a better way of writing the word ad (Latin for "at" or "toward") that was not so easily confused with AD, the designation for Anno Domini, the years after the birth of Christ.

The USB Symbol

Created as part of the USB 1.0 spec, the USB icon was drawn to resemble Neptune's trident, the mighty Dreizack. In lieu of the pointed triangles at the tip of the three-pronged spear, the USB Promoters decided to alter the shapes to a triangle, a square, and a circle. This was done to signify all the different peripherals that could be attached using the standard.

The FireWire Symbol

Back in 1995, a small group at Apple, the main developer of FireWire, set about designing a symbol that could accurately reflect the new technology they were working on.
Originally intended as a serial alternative to SCSI, FireWire's main allure was that it promised high-speed connectivity for digital audio and video equipment. So designers opted for a symbol with three prongs, representing video, audio, and data. Initially the symbol was red, but it was later altered to yellow for unknown reasons.

Apple's Command Symbol

While working with other team members to translate menu commands directly to the keyboard, Andy Hertzfeld and his team decided to add a special function key. The idea was simple: when pressed in combination with other keys, this "Apple key" would select the corresponding menu command. Jobs hated it, or more precisely the symbol used to represent the button, which was yet another picture of the Apple logo. Hertzfeld recalls his reaction: "There are too many Apples on the screen! It's ridiculous! We're taking the Apple logo in vain!" A hasty redesign followed, in which bitmap artist Susan Kare pored through an international symbol dictionary and settled on one floral symbol that, in Sweden, indicated a noteworthy attraction in a campground. Alternately known as the Gorgon loop, the splat, the infinite loop, and, in the Unicode standard, a "place of interest sign," the command symbol has remained a mainstay on Apple keyboards to this day.

The Bluetooth Symbol

The Bluetooth symbol is actually a combination of the two runes that represent the initials of Harald Blåtand, the Danish king for whom the technology is named. It just so happens the first Bluetooth receptor also had a "teeth-like" shape, and was, you guessed it, blue. But the symbolic interplay doesn't end there. As the Bluetooth SIG notes, Blåtand "was instrumental in uniting warring factions in parts of what are now Norway, Sweden, and Denmark – just as Bluetooth technology is designed to allow collaboration between differing industries such as the computing, mobile phone, and automotive markets."
De-identification of health data has been crucial for all types of health research, but recent articles in the medical and scientific literature have suggested that de-identification methods do not sufficiently protect the identities of individuals and can be easily reversed. A recent review conducted by researchers at CHEO, entitled "A Systematic Review of Re-identification Attacks on Health Data" and published in PLoS ONE, did not uncover evidence to support this. "If re-identification rates were as high as some of these articles suggest, it would be worrisome," says lead author Dr. Khaled El-Emam. "But our review did not support these claims: there is no broad empirical support for a failure of anonymization." Such a failure would have significant policy implications. For example, it may become necessary to obtain patient consent before data is released (a time-consuming undertaking), the incentive to de-identify would decline, and the likelihood of breaches would increase. For this reason, Dr. El-Emam and his team conducted a review that set out to characterize known re-identification attacks on health data and compare them to attacks on other types of data, to calculate the number of records correctly identified in these attacks, and to assess whether the results indicate a weakness in current de-identification methods. After identifying 14 relevant studies and analyzing them in detail, the group was unable to find convincing evidence that existing de-identification methods are not effective. First, few of these attacks involved health data, which is naturally protected more strenuously. Second, many of the attacks were on small databases with large confidence intervals around their success rates. Most importantly, the majority of re-identified data was not de-identified according to existing standards. "Of the 24 studies we examined, only six were attacks on health data and only one of these was de-identified according to standards," Dr. El-Emam points out.
"In that particular study, the proportion of correctly re-identified records was very low: about 0.013%." In certain well-publicized re-identification attacks, adversaries were able to make use of such information as an individual's date of birth, gender, and residential zip code. Since these three features were not modified in any way, the database would not meet basic standards for de-identification. If anything, such a breach serves to underscore the importance of implementing existing de-identification standards. Dr. El-Emam concludes by saying that, in order to have a more accurate picture of the extent to which de-identification protects against real attacks, future research on re-identification attacks should focus on large databases that have been de-identified according to existing standards, and that success rates should be correlated with how well de-identification was performed. In the meantime, it is suggested that data custodians continue to de-identify using current best practices. Link to report: www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0028071
I read this book years ago, and would like to read it again. This is the best account I have found of the complex societies of the three different peoples living in the Iberian Peninsula in the Middle Ages (eleventh century). Christians, Jews, and Muslims shared what is now modern Spain and Portugal. The divisions amongst the three were not so clear, since even those three groups were further fragmented, with alliances being formed between Christians and Muslims so as to fight their respective Christian and Muslim neighboring enemies.

The contrast in sophistication, in particular between the Muslim and Christian societies, was astounding, with the former clearly standing out. And in between, the Jewish community managed to keep a balance while associating easily with both.

The plot emerges as the threads that three different characters, one from each of the three religions and civilizations, weave together. The refined Muslim poet, the well-respected Jewish doctor, and the more brutish Christian "hidalgo" constitute the three primary colors of this well-braided tapestry. The meeting point is represented by a Roman bridge close to the city of Toledo.

Frank Baer was a German linguist. It is a shame he wrote very few books.
Conrad Ferdinand Meyer

Conrad Ferdinand Meyer (11 October 1825 – 28 November 1898) was a Swiss poet and historical novelist, a master of realism chiefly remembered for stirring narrative ballads like "Die Füße im Feuer" (The Feet in the Fire). Meyer was born in Zürich, of patrician descent. His father, who died early, was a statesman and historian, while his mother was a highly cultured woman. Throughout his childhood, two traits were observed that later characterized the man and the poet: he had a most scrupulous regard for neatness and cleanliness, and he lived and experienced more deeply in memory than in the immediate present. He suffered from bouts of mental illness, sometimes requiring hospitalization; his mother, similarly but more severely afflicted, killed herself. Having finished the gymnasium, he took up the study of law, but history and the humanities were of greater interest to him. He went for considerable periods to Lausanne, Geneva, Paris, and Italy, where he interested himself in historical research. The two historians who influenced Meyer particularly were Louis Vulliemin at Lausanne and Jacob Burckhardt at Basel, whose book on the culture of the Renaissance stimulated his imagination and interest. From his travels in France and Italy (1857) Meyer derived much inspiration for the settings and characters of his historical novels. In 1875 he settled at Kilchberg, above Zürich. Meyer found his calling only late in life; for many years, being practically bilingual, he wavered between French and German. The Franco-Prussian War brought the final decision. In Meyer's novels, a great crisis often releases latent energies and precipitates a catastrophe. In the same manner, his own life, which before the war had been one of dreaming and experimenting, was stirred to the very depths by the events of 1870. Meyer identified himself with the German cause and, as a manifesto of his sympathies, published the little epic Hutten's Last Days in 1871.
After that his works appeared in rapid succession. His works were collected into 8 volumes in 1912. The periods of the Renaissance and Counter-Reformation furnished the subjects for most of his novels. Most of his plots spring from the deeper conflict between freedom and fate and culminate in a dramatic crisis in which the hero, in the face of a great temptation, loses his moral freedom and is forced to fulfill the higher law of destiny.

His novels:
- 1876 Jürg Jenatsch - Graubünden, Thirty Years War; a story of Switzerland in the 17th century through the conflict between Spain-Austria and France. The hero is a Protestant minister and fanatic patriot who, in his determination to preserve the independence of his little country, does not shrink from murder and treason, and in whom noble and base motives are strangely blended.
- 1891 Angela Borgia - Italian Renaissance

Meyer's main works are historical novellas:
- 1873 Das Amulett (The Amulet) - France during the St. Bartholomew's Day Massacre
- 1878 Der Schuss von der Kanzel (The Shot from the Pulpit) - Switzerland
- 1879 Der Heilige (The Saint) - Thomas Becket, Middle Ages, England
- 1881 Plautus im Nonnenkloster (Plautus in the Nunnery) - Renaissance, Switzerland
- 1882 Gustav Adolfs Page (Gustav Adolf's Page) - Thirty Years War
- 1883 Das Leiden eines Knaben (The Suffering of a Boy) - France during the reign of Louis XIV
- 1884 Die Hochzeit des Mönchs (The Wedding of the Monk) - Italy; Dante himself is introduced at the court of Cangrande in Verona as narrator of the strange adventure of a monk who, after the death of his brother, is forced by his father to break his vows but who, instead of marrying the widow, falls in love with another young girl and runs blindly to his fate.
- 1885 Die Richterin (The Judge) - Carolingian times, Grisons; introduces Charlemagne and his palace school
- 1887 Die Versuchung des Pescara (The Temptation of Pescara) - Renaissance, Italy; tells of the great crisis in the life of Pescara, general of Charles V and husband of Vittoria Colonna

His poetry:
- 1867 Balladen
- 1870 Romanzen und Bilder (Romances and Pictures)
- 1872 Huttens letzte Tage (Hutten's Last Days) - a short epic poem
- 1873 Engelberg
- 1882 Gedichte (Poems)

It is as a master of narrative ballads, often on historical themes, that Meyer is mostly remembered. His fiction also typically focuses on key historical moments from the Middle Ages, the Reformation, and the Counter-Reformation. Meyer's lyric verse is almost entirely the product of his later years. He frequently celebrated human handiwork, especially works of art. Rome and the monumental work of Michelangelo were among the decisive experiences in his life.
Here along the beautiful and rugged California Central Coast, we experience the most spectacular sunsets and deepest minus tides during the winter months. On Thursday I joined a good friend for an outing to a favorite local spot for photographers – Montaña de Oro State Park just south of the sea hamlets of Los Osos and Morro Bay. Due to the prolonged rain we experienced in December, this planned shoot had been rescheduled numerous times. Finally, conditions were ripe to hike out to Hazard Canyon Reef and shoot the minus tide and sunset. For you Pixel Peepers, these were all Camera RAW bracketed shots processed in Photomatix Pro 4.2, Lightroom 4, and with onOne Software’s Perfect Photo Suite 6.1, which I love for post-processing and stylizing my images. All images were taken with my Nikon D800 and later processed for HDR. Montaña de Oro is Spanish for “Mountain of Gold” and is named for the golden wildflowers found in the park that bloom in the Spring. This gorgeous and very rugged State Park has 8,000 acres of rocky cliffs, secluded sandy beaches, coastal plains, streams, canyons, and hills, including 1,347 ft Valencia Peak. It also has many hiking, mountain biking, and equestrian trails, as well as a campground located across from Spooner’s Cove, a very popular beach. Naturalists and backpackers enjoy the solitude and freedom found along the park’s trails. Wildlife in the park includes black-tailed deer and the Black Oystercatcher. The Black Oystercatcher is a large, entirely black shorebird, with a long, bright red bill and pink legs. It has a bright yellow iris and a red eye-ring. Five hundred years ago, when Europeans first arrived on the California Central Coast, they found it inhabited by the Chumash Indians. An estimated 20,000 to 30,000 of them lived in small villages spread over a territory which extended from Morro Bay south to Malibu. 
Although the Chumash depended heavily upon the sea, they also drew on many other sources for food, clothing, and shelter, and were probably part of a large trading network. The Spanish Explorers who visited the Montaña de Oro area in 1542 recorded that the Indians were attractive, friendly people who paddled out to greet them in canoes. In 1769, Don Gaspar de Portola marched his troops north from San Diego to establish new territory for the king of Spain. With the beginning of the Mission period, the Indians were moved inland, and this was the beginning of the end for the Chumash. Most died from European diseases to which they had no immunity. The survivors abandoned their villages and disappeared. With them, their customs, heritage and culture all but vanished as well. Traces of Chumash middens (refuse mounds) and village sites can still be seen in the park, but knowledge of the Chumash culture remains sketchy. For this reason, and so that others may enjoy them, it is against the law to tamper with or disturb any Indian sites. On April 24, 1965, Rancho Montaña de Oro was dedicated as a California State Park after it was acquired in a “friendly” eminent domain proceeding under the Park acquisition program that then Governor Edmund G. “Pat” Brown had launched and managed to fund. The Rancho Montaña de Oro property was held by a corporation, Rancho Montaña de Oro, Inc., which was owned by the prominent Los Angeles trial and constitutional lawyer Morris Lavine and Irene M. Starkey. They had the options of developing the park land or preserving it as open space and in the public trust. They chose the latter, despite the fact that their financial gains were far less by doing so. Rancho Montaña de Oro, until recently, has had the longest uninterrupted, preserved and undeveloped coastal area of any publicly owned land in California. For more photographs of coastal sunsets and minus tides, see my prior blog post Winter’s Blessings on the California Central Coast. 
In honor of Valentine’s Day (although belated by a few hours), I wanted to share a photograph taken on New Year’s Eve at Shell Beach, California. As many of my friends know, I have been very busy getting set up and working as an Artist in Residence at Studios on the Park in Paso Robles. Therefore, processing new images has taken an unfortunate back seat to getting business permits, purchasing supplies, and readying work for the gallery. As any photographer knows, going any length of time without a major shoot is extremely painful, and my finger has been itching to get back to the shutter button. Fortunately I have plans for several new shoots. While most people were out partying on New Year’s Eve, I spent a quiet, delightful evening in Shell Beach (near Pismo Beach for you non-locals) watching the final sunset of 2011. I feel so blessed to live along the spectacular California Central Coast. This image was taken near the so-called “Three Palms Beach” during a wonderful winter minus tide. The tide pools reflected in the rocks were just gorgeous that last night of the year. Winter along the Central Coast brings the most colorful sunsets, along with deep minus tides, exposing rock formations, tide pool canyons, and exquisite sea creatures normally tucked away beneath the azure waters. This image is comprised of a series of five HDR bracketed exposures processed in Photomatix, Lightroom, Photoshop, and onOne Software. Of course they were captured in Camera Raw format. What a sweet ending to the year!
Scandium is a chemical element with symbol Sc and atomic number 21. It is a silvery-white metallic transition metal, discovered in 1879 by Lars Fredrik Nilson and his team. He named it scandium, from the Latin Scandia, meaning "Scandinavia".

Scandium Chemistry 101:
- Atomic number: 21
- Group: transition metal
- Atomic weight: 44.9559
- Density @ 293 K: 3.0 g/cm3
- State (s, l, g): solid
- Melting point: 1812.2 K
- Boiling point: 3021 K
- Electron configuration: [Ar] 3d1 4s2
- Crystal structure: hexagonal

During the Cold War, the Russians were the first to use scandium with aluminum alloys, and it was used for military endeavors such as the fins on ballistic missiles (for blasting through polar ice) and MiG fighter jets. Scandium is used as a grain-refining additive for aluminum. It enhances malleability, strength, integrity of welds, resistance to recrystallization, and the fatigue life of the aluminum. For bicycling applications, thinner-walled and lighter tubing can be used. Most of the scandium used in the US goes into high-intensity lights. Scandium is quite expensive, costing in the neighborhood of $120 per gram ($55,000 per pound).

My test steed, an Ibis Mojo, takes a 31.6 seatpost, and it is accessorized in red, so I had an easy choice to test: Red! Thanks to Jason at FairWheel Bikes for helping out with the review. FairWheel Bikes out of Tucson, Arizona, not only carries some of the most tricked-out weight-weenie parts in the country, they are also the US KCNC distributor.
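As a quick sanity check on the price figures quoted above, here is a minimal sketch that converts per-gram to per-pound pricing. It assumes the standard avoirdupois pound (453.592 g); the $120/gram figure is the one quoted in this post, not a current market price:

```python
# Rough check: does $120/gram really work out to roughly $55,000/pound?
GRAMS_PER_POUND = 453.592  # avoirdupois pound

def price_per_pound(price_per_gram: float) -> float:
    """Convert a per-gram price in dollars to a per-pound price."""
    return price_per_gram * GRAMS_PER_POUND

print(round(price_per_pound(120)))  # 54431, i.e. roughly $55,000 per pound
```

So the two quoted numbers are consistent with each other, with the per-pound figure rounded up slightly.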
For decades, China has tried to bring Hong Kong closer into its orbit economically, politically, and physically. The city's legislature is stacked with Beijing-friendly politicians; major multi-billion-dollar infrastructure projects link the city with the mainland; and mainland Chinese enterprises account for more than two-thirds of the market capitalization on the local stock exchange. But, by all accounts, China has so far failed at winning over the hearts and minds of the local populace. Data released yesterday (June 27) by the University of Hong Kong's public polling unit confirms what many have seen on the streets of Hong Kong in recent weeks, as millions turned out to protest a planned extradition law: widespread distrust and disdain of Communist-ruled mainland China. Telephone poll results from a survey of 1,015 people showed the proportion of citizens identifying as "Hongkongers" reached a record high of 53%, while the proportion identifying as "Chinese" slumped to an all-time low of 11%, with the remaining 36% identifying as a mix of both. Other options for people's choice of identity included "Asians," "global citizens," and "members of the Chinese race," all of which ranked higher than "Chinese" in terms of the strength and importance of that identity. Only "citizens of the People's Republic of China" ranked lower. Hong Kong was handed over to Chinese sovereignty in 1997 from British rule, and since then has enjoyed a high degree of political and judicial independence under a setup known as "one country, two systems." Citizens enjoy robust civil liberties that are unheard of in mainland China, but many say that Beijing has steadily eroded those liberties over the years. As China has tightened its grip on Hong Kong, Hongkongers' sense of identity as distinct from that of their mainland Chinese counterparts has crystallized, revolving around key ideas of freedom and democracy, but also more intangible things like the Cantonese language.
A recent column penned by a Hong Kong student at a US college—titled “I am from Hong Kong, not China“—and the ensuing backlash she faced from mainland Chinese students as a result, paints the battle lines clearly. On July 1, Hong Kong will mark the 22nd anniversary of its handover to China. The day, however, has become an annual opportunity for citizens to protest for democracy and freedoms, and this year may well see a record turnout following the momentum of the past few weeks. Already, protesters have co-opted imagery from Chinese Communist Party propaganda to urge people to hijack and disrupt the flag-raising ceremony, during which both the Chinese and Hong Kong flags are honored, and to drown out the Chinese national anthem (link in Chinese) by singing rival anthems from different countries.
Some objects in Python are subscriptable. This means that they contain, or can contain, other objects. Integers are not subscriptable objects; they are used to store whole numbers. If you treat an integer like a subscriptable object, an error will be raised.

In this guide, we're going to talk about the "typeerror: 'int' object is not subscriptable" error and why it is raised. We'll walk through a code snippet with this problem to show how you can solve it in your code. Let's begin!

The Problem: typeerror: 'int' object is not subscriptable

We'll start by taking a look at our error message:

typeerror: 'int' object is not subscriptable

The first part of our error message, TypeError, states the type of our error. A TypeError is an error that is raised when you try to perform an operation on a value that does not support that operation. Concatenating a string and an integer, for instance, raises a TypeError.

The second part of our message informs us of the cause. This message is telling us that we are treating an integer, which is a whole number, like a subscriptable object. Integers are not subscriptable objects. Only objects that contain other objects, like strings, lists, tuples, and dictionaries, are subscriptable.

Let's say you try to use indexing to access an item from a list:

email_providers = ["Gmail", "Outlook", "ProtonMail"]
print(email_providers[2])

This code returns: ProtonMail. Lists are subscriptable, which means you can use indexing to retrieve a value from a list. You cannot use this same syntax on a non-subscriptable value, like a float or an integer.

An Example Scenario

We're going to write a program that asks a user for the date on which their next holiday starts and prints out each value on a separate line. This program will have an error that we can solve. Let's start by writing our main program:

holiday = int(input("When does your holiday begin? (mmddyyyy) "))
month = holiday[0:2]
day = holiday[2:4]
year = holiday[4:8]
print("Month:", month)
print("Day:", day)
print("Year:", year)

This program asks a user to insert the day on which their holiday begins using an input() statement. Then, we use slicing to retrieve the values of the month, day, and year that the user has specified. These values are stored in variables. Next, we print out the values of these variables to the console. Each value is given a label which states the part of the date to which the value corresponds. Let's run our code:

Traceback (most recent call last):
  File "main.py", line 2, in <module>
    month = holiday[0:2]
TypeError: 'int' object is not subscriptable

Let's fix this error. We have converted the value of "holiday" into an integer. This means that we cannot access it using slicing or indexing. Integers are not indexed like strings. To solve this problem, we can remove the int() statement from our code. The input() statement returns a string value, which we can slice up using our code. Let's revise our code:

holiday = input("When does your holiday begin? (mmddyyyy) ")

Now, let's try to run our code:

When does your holiday begin? (mmddyyyy) 02042021
Month: 02
Day: 04
Year: 2021

Our code runs successfully! We are no longer trying to slice an integer because our code does not contain an int() statement. Instead, "holiday" is stored as a string. This string is sliced using the slicing syntax.

The "typeerror: 'int' object is not subscriptable" error is raised when you try to access an integer as if it were a subscriptable object, like a list or a dictionary. To solve this problem, make sure that you do not use slicing or indexing to access values in an integer. If you need to perform an operation only available to subscriptable objects, like slicing or indexing, you should convert your integer to a string or a list first. Now you're ready to solve this Python TypeError like an expert!
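If you do want to keep the int() conversion, say to reject input that isn't purely numeric, another option is to convert the integer back to a string before slicing. The split_date helper below is a sketch of that approach, not part of the original program; the zfill(8) call is an added safeguard that restores a leading zero int() would otherwise drop:

```python
def split_date(holiday: int) -> tuple:
    """Slice an mmddyyyy integer into (month, day, year) strings."""
    text = str(holiday).zfill(8)  # re-pad: int("02042021") drops the leading 0
    # Strings are subscriptable, so slicing works again
    return text[0:2], text[2:4], text[4:8]

month, day, year = split_date(2042021)  # the user typed 02042021
print("Month:", month)  # Month: 02
print("Day:", day)      # Day: 04
print("Year:", year)    # Year: 2021
```

This keeps the numeric validation that int() provides while still letting you use the slicing syntax on the string form of the value.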
We have found compelling evidence for the existence of several sub-surface oceans in various places in our solar system. The most well-known of these bodies of liquid water is under the ice crust of Europa, a moon of Jupiter, with others located elsewhere. These oceans are logical places to look for signs of past or present extraterrestrial life. However, we have yet to obtain a sample of any of these oceans for analysis. It is time for that to change, but not without taking precautions to avoid damaging any such life, should it exist. What follows is my idea, freely available for anyone who wishes to use it, to safely obtain and analyze such samples. These ice-tunneling probes could be ejected from a larger lander, or simply dropped directly onto the surface from orbit. This would be far less expensive than any sort of manned interplanetary exploration. Exposure to vacuum and radiation, in space, would thoroughly sterilize the entire apparatus before it even lands, protecting anything which might be alive in the ocean underneath from contamination by organisms from Earth. In this cross-sectional diagram, the light blue area represents the ice crust of Europa, or another solar-system body like that moon. The ice-tunneling lander is shown in red, orange, black, yellow, and green. The dark blue area is the vertical tunnel created by the probe, shown shortly after tunneling begins. As the probe descends, the dome shown in gray caps the tunnel, and stays on the surface, having been previously stored, folded up, in the green section of the egg-shaped probe. The gray section is designed as a geodesic dome, with holes of adjustable size to allow heat to escape into space. An extendable, data-carrying tether connects the egg-shaped tunneling module to the surface dome. Solar-energy panels and radio transmitters and receivers stay at the surface, attached to the gray dome. The computers necessary to operate the entire probe are in the yellow section. 
The black section that extends outward, slightly, from the body of the tunneler would contain mechanisms to obtain samples of water for analysis. The orange section is where actual samples are stored and analyzed. The red part of the tunneler is weighted, so that gravity forces it to stay at the bottom. It is designed to heat up enough to melt the ice underneath it, allowing the entire “egg” to descend, attached to its tether. Water above the tunneling probe re-freezes, sealing the tunnel so that potentially-damaging holes are not left in the ice crust of Europa. The heating units in the red section can be turned on and off as needed, to slow, hasten, or stop the probe’s descent through the crust. Oceans in other places in the solar system might require certain adjustments to this design. For example, Ganymede, another moon of Jupiter, is far rockier than Europa. If this design were used on Ganymede, the tunneling probe would likely be stopped by sub-surface rocks. For this type of crust, the probe’s design could be modified to allow lateral movement of the tunneler, in order to go around rocks. On Europa, Ganymede, and elsewhere, one limitation of this design is imposed by the maximum length of the tether. We would not want to go all the way down to the subsurface oceans with the earliest of these probes, though. A better strategy would be to only tunnel part-way into the crust at first, capturing liquid samples of water before refreezing of the ice. After all, this ice in the crust could have been part of the lower, liquid ocean at some point in the past, and it should be analyzed thoroughly before heat-tunneling any deeper. The decision to make the tether long enough to go all the way through the crust, into the subsurface ocean itself, is not one to make lightly. It would be best to study what we find in molten crust-samples, first, before tunneling all the way through the protective crusts of these oceans.
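The essay's scheme of switching the heating units on and off to control the descent amounts to simple bang-bang control. The sketch below is purely illustrative: the target depth, melt rate, and step count are invented numbers, not values from any mission design.

```python
# Illustrative bang-bang control of the tunneler's heaters (all numbers invented).
def descend(target_depth_m, melt_rate_m=0.01, steps=10000):
    depth = 0.0         # metres of ice melted so far
    heater_on = False
    for _ in range(steps):
        # Heaters stay on while short of the target and shut off at it,
        # mirroring the essay's "slow, hasten, or stop" behaviour.
        heater_on = depth < target_depth_m
        if heater_on:
            depth = min(depth + melt_rate_m, target_depth_m)
    return depth, heater_on

final_depth, heater_on = descend(50.0)
print(final_depth, heater_on)  # 50.0 False
```

A real probe would modulate heater power continuously and close the loop on sensor data rather than a step counter; this only illustrates the on/off principle the essay describes.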
U.S. workplaces may need to consider innovative methods to prevent fatigue from developing in employees who are obese. Based on results from a new study published in the Journal of Occupational and Environmental Hygiene (JOEH), workers who are obese may have significantly shorter endurance times when performing workplace tasks, compared with their non-obese counterparts. The study, conducted at Virginia Tech in Blacksburg, Va., examined the endurance of 32 individuals in four categories (non-obese young, obese young, non-obese older, and obese older) who completed three distinct tasks that involved a range of upper extremity demands—hand grip, intermittent shoulder elevation, and a simulated assembly operation. Each task involved periods of work and rest, and included pacing demands similar to those experienced by workers in manufacturing settings. "Our findings indicated that on average, approximately 40 percent shorter endurance times were found in the obese group, with the largest differences in the hand grip and simulated assembly tasks. During those tasks, individuals in the obese group also exhibited greater declines in task performance, though this difference was only evident among females," said Lora A. Cavuoto, PhD, an assistant professor in the department of industrial and systems engineering at the University at Buffalo, SUNY, in Buffalo, New York. In addition to examining how obesity affected physical demands and capacity, Cavuoto and her colleagues looked at the interactive effect of obesity and age on endurance times. "Previous studies have indicated that both age and obesity lead to decreased mobility, particularly when it comes to walking and performing lower extremity tasks. However, we found no evidence of an interactive effect of obesity and age on endurance times, which is contrary to previous findings," said Maury A. 
Nussbaum, PhD, a professor in the department of industrial and systems engineering at Virginia Tech, who also worked on the study. Obesity is associated with physiological changes at the muscular level, including a decrease in blood flow, thereby limiting the supply of oxygen and energy sources. When performing sustained contractions, these physiological changes may lead to a faster onset of muscle fatigue. The prevalence of obesity has doubled over the past three decades, and this increase has been associated with more healthcare costs, higher rates of workplace injury, and a greater number of lost workdays. According to Cavuoto and Nussbaum, the results from this and related studies will contribute to a better understanding of the ergonomic impacts of obesity and age, which is important for describing the link between personal factors and the risk of workplace injury. "Workers who are obese may need longer rest breaks to return to their initial state of muscle function. Based on the increased fatigue found among workers who are obese, workplace designers may need to consider adding fixtures and supports to minimize the amount of time that body mass segments need to be supported. We believe our results will help to develop more inclusive ergonomic guidelines," said Cavuoto. Journal of Occupational and Environmental Hygiene DOI: 10.1080/15459624.2014.887848
National Infant Immunization Week National Infant Immunization Week is an annual observance to promote the benefits of immunizations and to improve the health of children two years old or younger. Since 1994, local and state health departments, national immunization partners, healthcare professionals, community leaders from across the United States, and the Centers for Disease Control and Prevention have worked together through National Infant Immunization Week to highlight the positive impact of vaccination on the lives of infants and children, and to call attention to immunization achievements. Observed from April 22 - 29, 2017, National Infant Immunization Week will be celebrated as part of World Immunization Week, an initiative of the World Health Organization. During World Immunization Week, all six WHO regions, including more than 180 Member States, territories, and areas, will simultaneously promote immunization, advance equity in the use of vaccines and universal access to vaccination services, and enable cooperation on cross-border immunization activities. National Infant Immunization Week Sponsor: National Center for Immunization and Respiratory Diseases To learn more, visit http://www.cdc.gov
- over-consumption of hot and spicy foods - over-consumption of meat - regular alcohol consumption In traditional Chinese medicine the cause for “heat and fire in the Stomach” is over-consumption of “hot foods”. (1) “Hot foods” are all spicy foods (with the exception of mint and its family, turnip, and radishes), all meats (with the exception of pork), walnuts, and some seeds. “Hot drinks” are coffee, wine, and spirits. Cigarettes are also hot in nature. Thus regular smokers, regular alcohol consumers, meat fans, and spicy over-eaters are likely to have “Stomach heat” or “Stomach fire”. If the heat in the Stomach is not addressed it will eventually burn the fluids of the Stomach and lead to a condition known as Stomach Yin deficiency. Stomach Yin deficiency is a rather chronic condition, while Stomach fire is rather acute. - Stomach Yin deficiency symptoms: dry mouth and throat, dry stools, thirst with no desire to drink, lack of appetite, epigastric pain - Stomach fire symptoms: burning pain in the Stomach, constant unsatisfied hunger, sour regurgitation, nausea, swollen, painful or bleeding gums, bad breath, thirst with desire for cold drinks As heat is drying in nature, Stomach heat dries out the fluids of the Stomach, manifesting in thirst, dry mouth, and constipation. In the cases of “Stomach Yin deficiency”, where the dryness has become chronic, there is dry mouth and throat (especially in the afternoon - the time governed by Yin), dry stools, and thirst with no desire to drink. There is no appetite, and people often feel full after eating small amounts of food, as there is not enough stomach juice to process the food. There is often epigastric pain, and the chronic heat may manifest in some afternoon fever. (1) In the case of “Stomach fire” the Stomach pain is rather severe and burning. As the energy of “the fire” quickly engulfs/digests the food there is constant, unsatisfied hunger. 
Since the nature of the fire is to flare upward, the natural downward flow of the Stomach Qi becomes disrupted and rebellious, manifesting in sour regurgitation, nausea, and sometimes vomiting after eating. The Spleen/Stomach partnership opens into the mouth; thus Stomach fire manifests in swelling and pain in the gums, as well as bleeding gums. As the heat induces smell, there is bad breath. The thirst is severe, with a desire for cold liquids. There is also constipation (the heat has parched the fluids). In both “Stomach Yin deficiency” and “Stomach fire” the treatment principle is to cool Stomach heat and nourish the Stomach lining and stomach fluids. Heat-causing foods and drinks mentioned in the Cause section need to be avoided. Foods with a cool and moistening nature should be selected. (1) Maciocia, Giovanni (1989). The Foundations of Chinese Medicine. Nanjing: Harcourt Publishers Limited (2) Pitchford, Paul (2002). Healing with Whole Foods. Berkeley: North Atlantic Books
Monarchies in Europe There are currently twelve (12) sovereign monarchies in Europe: the Principality of Andorra, the Kingdom of Belgium, the Kingdom of Denmark, the Principality of Liechtenstein, the Grand Duchy of Luxembourg, the Principality of Monaco, the Kingdom of the Netherlands, the Kingdom of Norway, the Kingdom of Spain, the Kingdom of Sweden, the United Kingdom of Great Britain and Northern Ireland and the State of the Vatican City. Ten of these are states where the head of state (a monarch) inherits his or her office, and usually keeps it for life or until abdicating. As for the other two: in the Vatican City (an elective monarchy, styled as an absolute theocracy), the head of state, the Sovereign (who is the Pope), is elected at the papal conclave, while in Andorra (technically a semi-elective diarchy), the joint heads of state are the elected President of France and the Bishop of Urgell, who is appointed by the Pope. Most of the monarchies in Europe are constitutional monarchies, which means that the monarch does not influence the politics of the state: either the monarch is legally prohibited from doing so, or the monarch does not utilize the political powers vested in the office by convention. The exceptions are Liechtenstein, which is usually considered a semi-constitutional monarchy due to the large influence the prince still has on politics, and the Vatican City, which is a theocratic absolute elective monarchy. There is currently no major campaign to abolish the monarchy (see monarchism and republicanism) in any of the twelve states, although there is a significant minority of republicans in many of them (e.g. the political organisation Republic in the United Kingdom). Currently seven of the twelve monarchies are members of the European Union: Belgium, Denmark, Luxembourg, the Netherlands, Spain, Sweden and the United Kingdom. 
At the start of the 20th century, France, Switzerland and San Marino were the only European nations to have a republican form of government. The ascent of republicanism to the political mainstream started only at the beginning of the 20th century, facilitated by the toppling of various European monarchies through war or revolution; as of the beginning of the 21st century, most of the states in Europe are republics with either a directly or indirectly elected head of state. - 1 Current monarchies - 2 Succession laws - 3 Table of monarchies in Europe - 4 Calls for abolition - 5 See also - 6 References - 7 Further reading Andorra has been a co-principality since the signing of a paréage in 1278, when the count of Foix and the bishop of La Seu d'Urgell agreed to share sovereignty over the landlocked country. After the title of the count of Foix had been passed to the kings of Navarre, and after Henry of Navarre had become Henry IV of France, an edict was issued in 1607 which established the French head of state as the legal successor to the count of Foix in regard to the paréage. Andorra was annexed by the First French Empire together with Catalonia in 1812–1813. After the Empire's demise, Andorra became independent again. The current joint monarchs are Bishop Joan Enric Vives Sicília and President François Hollande of France. Belgium has been a kingdom since 21 July 1831 without interruption, after it became independent from the United Kingdom of the Netherlands with Léopold I as its first king. Belgium is the only remaining popular monarchy in the world: The monarch is formally known as the "King of the Belgians", not the "King of Belgium". 
In a referendum held on 12 March 1950, 57.68 per cent of Belgians voted in favor of allowing Léopold III, whose conduct during World War II had been considered questionable and who had been accused of treason, to return to the throne; due to the ensuing civil unrest, however, he opted to abdicate in favor of his son Baudouin I on 16 July 1951. The current monarch is Philippe. In Denmark, the monarchy goes back to the prehistoric times of the legendary kings, before the 10th century. Currently, about 80 per cent support keeping the monarchy. The current monarch is Margrethe II. The Danish monarchy also includes the Faroe Islands and Greenland, which are parts of the Kingdom of Denmark with internal home rule. Due to this status, the monarch has no separate title for these regions. Liechtenstein formally came into existence on 23 January 1719, when Charles VI, Holy Roman Emperor, decreed the lordship of Schellenberg and the countship of Vaduz united and raised to the dignity of a principality. Liechtenstein was a part of the Holy Roman Empire until the Treaty of Pressburg was signed on 26 December 1805; this marked Liechtenstein's formal independence, though it was a member of the Confederation of the Rhine and the German Confederation afterwards. While Liechtenstein was still closely aligned with Austria-Hungary until World War I, it realigned its politics and its customs and monetary institutions with Switzerland instead. Although Liechtenstein had been a constitutional monarchy since 1921, Hans-Adam II demanded more influence in Liechtenstein's politics in the early 21st century, which he was granted in a referendum held on 16 March 2003, effectively making Liechtenstein a semi-constitutional monarchy again. However, the constitutional changes also provide for the possibility of a referendum to abolish the monarchy entirely. The current monarch is Hans-Adam II, who turned over the day-to-day governing decisions to his son and heir Alois, Hereditary Prince of Liechtenstein, on 15 August 2004. 
Luxembourg has been an independent grand duchy since 9 June 1815. Originally, Luxembourg was in personal union with the United Kingdom of the Netherlands and the Kingdom of the Netherlands from 16 March 1815 until 23 November 1890. While Wilhelmina succeeded Willem III in the Netherlands, this was not possible in Luxembourg due to the order of succession being based on Salic law at that time; he was succeeded instead by Adolphe. In a referendum held on 28 September 1919, 80.34 per cent voted in favor of keeping the monarchy. The current monarch is Henri. Monaco has been ruled by the House of Grimaldi since 1297. From 1793 until 1814, Monaco was under French control; the Congress of Vienna designated Monaco as being a protectorate of the Kingdom of Sardinia from 1815 until 1860, when the Treaty of Turin ceded the surrounding counties of Nice and Savoy to France. Menton and Roquebrune-Cap-Martin, part of Monaco until the mid-19th century before seceding in hopes of being annexed by Sardinia, were ceded to France in exchange for 4,000,000 French francs with the Franco-Monegasque Treaty in 1861, which also formally guaranteed Monaco its independence. Until 2002, Monaco would have become part of France had the house of Grimaldi ever died out; in a treaty signed that year, the two nations agreed that Monaco would remain independent even in such a case. The current monarch is Albert II. The Netherlands originally became independent as the Republic of the Seven United Netherlands, which lasted from 26 July 1581 until 18 January 1795, when the Netherlands became a French puppet state as the Batavian Republic. The Batavian Republic existed from 19 January 1795 until 4 June 1806. It was transformed into the Kingdom of Holland on 5 June 1806; since then, the Netherlands have been a kingdom. They were subsequently annexed to the French Empire in 1810. The United Kingdom of the Netherlands was established on 16 March 1815. 
With the independence of Belgium on 21 July 1831, the Netherlands again took a new form, as the Kingdom of the Netherlands. Nowadays, about 70 to 80 per cent of the Dutch are in favor of keeping the monarchy. The current monarch is Willem-Alexander. Norway was united and independent for the first time in 872, as a kingdom. It is thus one of the oldest monarchies in the world, along with the Swedish and Danish ones. Norway was part of the Kalmar Union from 1397 until 1524, then part of Denmark–Norway from 1536 until 1814, and finally part of the Union between Sweden and Norway from 1814 until 1905. Norway became completely independent again on 7 June 1905. Support for establishing a republic lies around 20 per cent. The current monarch is Harald V. Spain came into existence as a single, united kingdom under Charles I of Spain on 23 January 1516. The monarchy was briefly abolished by the First Spanish Republic from 11 February 1873 until 29 December 1874. The monarchy was abolished again on 14 April 1931, first by the Second Spanish Republic (which lasted until 1 April 1939) and subsequently by the dictatorship of Francisco Franco, who ruled until his death on 20 November 1975. Monarchy was restored on 22 November 1975 under Juan Carlos I, who was also the monarch until his abdication in 2014. His son Felipe VI is the current monarch. Today, there are a number of organisations campaigning in favor of establishing a Third Spanish Republic; data from 2006 suggest that only 25 per cent of Spaniards were in favor of establishing a republic, although the numbers have increased since Juan Carlos I abdicated. Sweden’s monarchy goes back as far as the Danish one, to the semi-legendary kings before the 10th century; since then it has not been interrupted. However, the unification of the rivalling kingdoms Svealand and Götaland (the consolidation of Sweden) did not occur until some time later, possibly in the early 11th century. 
The current royal family, the House of Bernadotte, has reigned since 1818. The current monarch is Carl XVI Gustaf. The monarchy of the United Kingdom of Great Britain and Northern Ireland can be defined to have started either with the Kingdoms of England (871) or Scotland (843), with the Union of the Crowns on 24 March 1603, or with the Acts of Union of 1 May 1707. It was briefly interrupted by the English Interregnum, with the Commonwealth of England existing in its stead from 30 January 1649 until 15 December 1653 and from 26 May 1659 until 25 May 1660, and the Protectorate taking its place from 16 December 1653 until 25 May 1659. The current monarch is Elizabeth II. Support for establishing a republic instead of a monarchy was around 18 per cent in the United Kingdom in 2006. A majority thinks that there will still be a monarchy in the United Kingdom in ten years' time, and public opinion is certain that it will still exist in thirty years; opinion is rather uncertain about the monarchy still existing in fifty years, and a clear majority believes that it will no longer exist a century after the poll. About 30 per cent are in favour of discontinuing the monarchy after Elizabeth's death. The monarch of the United Kingdom is also the monarch of the fifteen other Commonwealth realms, none of which are in Europe. Some of these realms have significant levels of support for republicanism. Unlike the Holy See, which has been in existence for almost two thousand years, the Vatican City was not a sovereign state until the 20th century. In the 19th century the annexation of the Papal States by the Kingdom of Sardinia, and the subsequent establishment of the Kingdom of Italy, were not recognized by the Vatican. However, by the Lateran Treaty of 1929, the Kingdom of Italy recognized Vatican City as an independent city state, and vice versa. Since then, the elected monarch of the Vatican City state has been the current pope. 
The pope still officially carries the title "King of the Ecclesiastical State" (in Latin: Rex Status Ecclesiæ). The succession order is determined by primogeniture in most European monarchies. Belgium, Denmark, Luxembourg, the Netherlands, Sweden and the United Kingdom now adhere to absolute primogeniture, whereby the eldest child inherits the throne, regardless of gender; Monaco and Spain have the older system of male-preference primogeniture, while Liechtenstein uses agnatic primogeniture. Norway will adopt absolute primogeniture for the grandchildren of King Harald V, but his second child and only son, Crown Prince Haakon, remains the heir apparent over his older sister, Princess Märtha Louise. There are plans to change to absolute primogeniture in Spain through a rather complicated process, as the change entails a constitutional amendment. Two successive parliaments will have to pass the law by a two-thirds majority and then put it to a referendum. As parliament has to be dissolved and new elections have to be called after the constitutional amendment is passed for the first time, the previous Presidente del Gobierno, José Luis Rodríguez Zapatero, indicated he would wait until the end of his first term in 2008 before passing the law, although this deadline passed without the referendum being called. The amendment enjoys strong public support. To change the order of succession in the United Kingdom, as the Queen of the United Kingdom is also the queen of the fifteen other Commonwealth realms, a change had to be agreed and made by all of the Commonwealth realms together. In the United Kingdom, the Succession to the Crown Act 2013 was enacted, and after completion of the legislative alterations required in some other realms, the changes came into effect across the Commonwealth realms on 26 March 2015. 
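The difference between the succession rules described above can be made concrete with a short sketch. The names are invented, and this orders only a single generation of siblings; real male-preference primogeniture applies recursively down each line of descent, so treat it as an illustration only.

```python
# Ordering one generation of heirs under two of the rules described above.
def succession_order(children, rule):
    """children: list of (name, sex, birth_order); lower birth_order = older."""
    if rule == "absolute":
        key = lambda c: c[2]                 # eldest first, regardless of sex
    elif rule == "male-preference":
        key = lambda c: (c[1] != "M", c[2])  # brothers (by age) before sisters (by age)
    else:
        raise ValueError(f"unknown rule: {rule}")
    return [name for name, _, _ in sorted(children, key=key)]

children = [("Anna", "F", 1), ("Boris", "M", 2), ("Clara", "F", 3)]
print(succession_order(children, "absolute"))         # ['Anna', 'Boris', 'Clara']
print(succession_order(children, "male-preference"))  # ['Boris', 'Anna', 'Clara']
```

Under absolute primogeniture the eldest child Anna comes first; under male-preference primogeniture her younger brother Boris jumps ahead of both sisters.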
Liechtenstein uses agnatic primogeniture (also known as Salic law), which completely excludes women from the order of succession unless there are no male heirs of any kind present, and was criticised by a United Nations committee in November 2007 for this perceived gender-equality issue. Luxembourg also used agnatic primogeniture until 20 June 2011, when absolute primogeniture was introduced. Table of monarchies in Europe Calls for abolition Due to the ongoing economic crisis in Europe beginning in 2008, the value of monarchies, and especially of the civil lists or appanages allocated to some members of reigning families (not just the sovereign and consort), has come under increased scrutiny by members of the citizenry. Some taxpayers object to these endowments, in their entirety or in part, as in some cases members of dynasties draw hundreds of thousands or millions of euros from national coffers per year, depending on the family member in question. Others express concern that during a period of rising inequality of wealth and, in some cases, growing poverty, royalty should receive no allowances, accept cuts, or pay increased taxes. Organisations which actively campaign to eliminate one or more of Europe's ten remaining hereditary constitutional monarchies and/or to liquidate assets reserved for reigning families include the Alliance of European Republican Movements, Republic in the United Kingdom and Hetis2013. Also, some political parties (e.g. Podemos in Spain) have called for national referenda to abolish monarchies. - List of European Union member states by political system - Monarchies in the Americas - Monarchies in Oceania - Monarchies in Africa - Monarchies in Asia
Cut 16 diagonal slices (1/4-inch thick) from carrot. To make beaks, cut a small triangle from edge of each carrot slice; set aside. Place the carrot slices for the feet in small microwavable bowl. Cover; microwave on High 30 seconds. Uncover; set aside. For head of each penguin, make small hole in each of 16 meatballs to hold beak; push skewer through meatball, starting at top of head. To place beak, insert carrot triangle into small hole of each meatball, inserting tip of skewer into carrot to secure in place.
Offered to you by RoboHouse & Pilz A course for technical personnel tasked with supervision of robot systems, machine designers and robot controllers, maintenance personnel, HSE managers, engineering managers and project engineers in production environments. - Basics of robot safety from the ISO 10218-2 and ISO/TS 15066 norms - Execution of a risk assessment and risk-reducing measures on cobots, based on an example case - Use and necessity of validation (both physical validation and calculating the validity of safety systems) - Basics of practical impact measurements based on the limit values of ISO/TS 15066 Work processes are very quickly becoming more efficient through humans and machines working together in much closer cooperation. The trend of the last couple of years is that, besides monotonous tasks, complex tasks can also be executed by robots. Traditionally, robots are isolated and safeguarded with fences. Now that robots work together with humans to a smaller or larger extent, we call them cobots. The risks caused by this Human-Robot Collaboration (HRC) have to be addressed. How does that work? That’s what you will learn during this training day! Experience shows that merely purchasing a robot made for HRC is not sufficient. In the training “Safe Human-Robot Cooperation” all the aspects of a safe cobot application are discussed, such as: the CE marking, making a risk assessment and the validation of cobot applications. Besides that, the topics of the ISO 10218-2 norm and the technical specification ISO/TS 15066 are discussed. This includes the different types of interaction, the four methods that can be used to safeguard the interaction between humans and robots, and the steps to get to a safe human-robot cooperation.
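As an illustration of the validation step, a measured quasi-static contact force can be compared against a per-body-region limit of the kind tabulated in ISO/TS 15066. The limit values below are placeholders, not figures from the specification; a real validation must use the normative table.

```python
# Sketch of a validation check against per-body-region quasi-static force limits
# of the kind tabulated in ISO/TS 15066. The values below are PLACEHOLDERS for
# illustration, not figures from the specification; use the normative table.
QUASI_STATIC_FORCE_LIMITS_N = {
    "hand": 140.0,   # placeholder limit, newtons
    "chest": 140.0,  # placeholder limit, newtons
}

def within_limit(body_region, measured_force_n):
    """Return True if a measured quasi-static contact force passes the limit."""
    return measured_force_n <= QUASI_STATIC_FORCE_LIMITS_N[body_region]

print(within_limit("hand", 120.0))   # True
print(within_limit("chest", 200.0))  # False
```

In practice the specification distinguishes transient from quasi-static contact and also sets pressure limits, so a full check would take the contact type and contact area into account as well.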
Tree fungus is a common ailment for trees. When fungal spores come in contact with a susceptible host they begin to grow, enter, and feed on the tree or shrub. Not all fungi growing on your tree are harmful; some do not affect the tree at all while others are even beneficial. It's best to have an arborist diagnose what type of fungus is growing on your tree. The arborist will be able to let you know if the fungus is harmful and be able to recommend appropriate treatments. How A Tree Fungus Spreads: Tree fungi produce spores that spread and infect other trees or shrubs. Spores spread through: - the air on windy days - hard rains that splash the spores up onto trunks and leaves - gardening tools - human movement; for example, walking through wet, diseased plants and then walking through healthy plants that aren't yet infected. Signs Of A Fungal Disease: You may see mushrooms or other types of fungi growing on or around your tree if you have a fungal disease. However, many times the tree fungus may not appear above ground or may have a different appearance than you would expect. The symptoms you see will depend on what type of tree fungus is attacking your tree. In most cases being infected with a tree fungus will result in loss of vigor and discoloration or wilting of leaves. Tree Fungus Treatments: Once infected with a tree fungus, your tree or shrub can never be fully cured. However, it can be treated. Our arborist will recommend a plan to suppress the tree fungus. This will stop the disease from getting worse and restore your tree's health and vigor. If the fungus is too far developed, the arborist may recommend removing the tree/shrub and replacing it with a fungi-resistant species. Prevention is key when it comes to fungus.
To prevent infection: - Don't overwater - Make sure your soil drains properly - Boost overall health with proper maintenance - Sanitize gardening tools between plants - Rake and remove fallen leaves from your yard - Use preventative fungicides Need Help With A Tree Fungus? Or Call 703.573.3029 To Book An Appointment Via Phone Common Tree Fungi Diseases caused by a tree fungus are separated into four categories: root and butt rot, canker, foliar/shoot, and wilts. Root Rot Diseases: Root rot diseases are caused by fungi that are found in the soil and attack the roots of plants. Armillaria Root Rot: Also known as oak root fungus, this is a disease caused by fungi of the genus Armillaria. If left untreated it will cause rapid decline and death. In the worst cases, trees left untreated can become structurally unsafe and uproot or snap, possibly causing property damage and injury. Symptoms: Dulling of leaf color, loss of vigor, leaves turn yellow or brown, leaves wilt. Targets: This tree fungus has an extremely wide range of hosts. Most trees and shrubs are susceptible to root rot. Learn more about Armillaria Root Rot Phytophthora Root Rot: Phytophthora root rot is an extremely damaging and widespread fungus-like organism that will rot away root systems and eventually kill your tree if left untreated. In the worst cases, trees left untreated can become structurally unsafe and uproot or snap, possibly causing property damage and injury. Symptoms: Suppressed growth, yellow or undersized needles/leaves, dieback, drooping and curling of leaves, leaves turning brown. Targets: Wide range of plants. The most susceptible include azalea, rhododendron, dogwood, pieris, yew bushes, deodar cedar, mountain laurel, heather, juniper, Fraser fir, white pine, shortleaf pine, camellia japonica, aucuba. Learn more about Phytophthora Root Rot Canker Diseases: Canker diseases are caused by fungi that commonly enter the tree through wounds in the bark or branch stubs.
Improper pruning can increase your risk of cankers. Thousand Cankers Disease: Originally confined to the western parts of the United States, Thousand Cankers Disease made it to Fairfax County in 2012. The tree fungus, Geosmithia morbida, is spread by the walnut twig beetle. These fungi develop cankers under the bark, so the cankers will not be visible. Symptoms: Thinning canopy, discolored leaves, small leaves, individual branch dieback. Targets: Black walnuts, but all species of walnuts may also be susceptible. Learn more about Thousand Cankers Disease Phytophthora Bleeding Cankers: Caused by various species of the Phytophthora fungi, bleeding cankers are wet-looking, oozing areas on the trunk of ornamental and shade trees. These cankers impact the vascular system of the tree, inhibiting important energy transfers. Symptoms: Reddish-brown fluid oozing from a crack in the bark; above the infected area, foliage may be pale and sparse and branch dieback may start to occur; a strong alcoholic, fermenting smell that attracts insects to the infected areas of the tree. Targets: Most ornamental and shade trees; however, beech, maple, and oak tend to be highly susceptible. Learn more about Phytophthora Bleeding Cankers Cytospora Canker: Also known as Leucostoma canker, this tree fungus is one of the most damaging diseases of spruces. This fungus grows throughout the inner bark, causing the portion of the tree behind the canker to die. Symptoms: Death of branches starting at the base of the tree and moving upward. Cankers aren't very noticeable, with little to no bark deformation. Needles on infected branches turn grayish and brown. Targets: Colorado blue spruce (and its varieties), Norway spruce, Koster's blue spruce, white spruce, Douglas fir, and other spruces. Learn more about Cytospora Canker Hypoxylon Canker: This tree fungus negatively affects growth and can lead to the death of the tree.
This fungus is typically a secondary invader, meaning that it usually does not infect healthy hardwoods but targets stressed or injured trees. Symptoms: At first the cankers show up as light brown or tan and look dry and dusty. Within a few weeks they will turn silvery gray with scattered black spots. Targets: Hardwoods, with three primary species: Hypoxylon atropunctatum found on oaks, Hypoxylon mammatum found on aspen, and Hypoxylon tinctor found on sycamores. Learn more about Hypoxylon Canker Foliar Diseases: Foliar diseases are very common and are caused by fungi that attack the leaves of the tree or shrub. Cercospora Leaf Spot: The tree fungus begins as a small spot on the leaves. As the disease progresses more spots appear until the leaf ceases to function as the site of the tree's food production and falls off the tree. Symptoms: Round leaf spots (which may have purple or dark brown borders), tiny black flecks (fungal spores) in the center of the spots. Targets: Wide range of ornamentals, shade trees, and plants. Our arborists report that white oaks are especially susceptible in our area. Learn more about Cercospora Leaf Spot Anthracnose: Anthracnose is a tree fungus that is active in the spring when the weather is wet and cool. Overwintering in fallen leaves, this fungus will continue to infect your tree year after year if not treated. Multiple infestations can leave trees stressed and susceptible to secondary invaders. Symptoms: Tan to brown leaf spots which may have purple rings around them, wilting, defoliation, dieback, leaf blotches. Targets: Dogwood, ash, oak, sycamore, birch, walnut, tulip, hickory, and maple. Learn more about Anthracnose Sooty Mold: Sooty mold is a fungus that grows on top of honeydew (the excrement of plant-sucking insects) and coats the leaves to the point where they can no longer absorb sunlight. This interrupts photosynthesis, and the tree will not be able to produce the nutrients it needs for survival.
If your trees and shrubs are turning black, you most likely have a sooty mold problem caused by an insect infestation. Targets: Typically seen on rose, ash, oak, elm, maple, willow, and fruit trees. Powdery Mildew: Powdery mildew is a tree fungus that coats leaves, blocking the process of photosynthesis. Every year trees and shrubs rely on photosynthesis to create food for new leaf growth. When this process is interrupted by powdery mildew the food reserves aren't replenished and the tree/shrub's growth will be stunted, which can affect overall health. The stress caused by powdery mildew also makes the tree more susceptible to other diseases and insect infestations. Symptoms: Powdery mildew is characterized by spots or patches of white to grayish, talcum-powder-like growth on the upper side of leaves. Targets: A wide range of plants, but lilacs, peonies, dogwoods, and crape myrtles are especially susceptible in this area. Learn more about Powdery Mildew Shot Hole Fungus: This tree fungus is commonly mistaken for insect damage because of the BB-sized holes it leaves. This fungus will stress your plants and should be treated to keep secondary invaders away. Symptoms: Brown or reddish-brown leaf spots, holes in leaves where the leaf spots used to be, yellow leaves dropping in mid-summer. Targets: Cherries and cherry laurels. Wilt Diseases: Wilt diseases are caused by fungi that invade a tree's vascular system. With the vascular system compromised, the tree cannot transport water and nutrients throughout itself. Verticillium Wilt: Verticillium wilt is caused by the soil-borne fungi Verticillium albo-atrum and Verticillium dahliae. The tree fungus invades through the roots, then spreads through the plant's vascular system. Once the xylem, the tree's water transportation system, is infected, it becomes clogged and water can no longer reach the tree's leaves. Verticillium is common and affects several hundred species of trees and shrubs. Symptoms: Leaf curling, drying, small yellow foliage, leaf scorch, and slow growth.
Oftentimes the symptoms are seen on one side or section of the tree. Targets: Ash, azalea, cherry, certain species of dogwood or linden, locust, magnolia, maple, oak, and redbud. Learn more about Verticillium Wilt Oak Wilt: Oak wilt is a disease that targets oak trees and is caused by the fungus Ceratocystis fagacearum. Spread through insects and connections between roots, there are no resistant or immune oak species. This illness was first found in 1944 in Wisconsin but has now spread to 21 states. Oak wilt is devastating and can kill rapidly within a single season. Symptoms: Leaf discoloration, wilt, defoliation, and ultimately the death of the tree from the top down. Targets: All species of oaks. Red oaks succumb to the disease faster than white oaks. Learn more about Oak Wilt Dutch Elm Disease: Dutch elm disease, one of the most destructive shade tree diseases in North America, is caused by a fungus spread by the elm bark beetle. First reported in the U.S. in 1928, the disease is believed to have been brought over from the Netherlands in a shipment of logs. Of the 77 million elms in North America in 1930, over 75% had been lost by 1989. To this day, the elm population across the United States is still battling this destructive disease. Symptoms: Dutch elm disease causes leaf wilting, curling and yellowing of leaves, leaf drop, and will kill your tree. Learn More About Dutch Elm Disease Didn't Find What You Were Looking For? Check out our Diseases and Bug indexes. Worried your tree is infected with one of these fungi? Use our online booking system or call 703.573.3029 to schedule a consultation with an arborist to diagnose your tree fungus.
A simple tool to generate random colors for your design needs. How does a random color generator work? You can generate random colors in the following steps: - Enter the color format you wish the color to be in. This tool offers HEX, RGB, and HSL color formats that you can work with. - Choose the number of colors you want to generate. - Click on the Create button. The tool will then pick a random color based on the input you provided. What is a Random Color Generator? When you start working on any project or program, you might need an attractive user interface full of colors. But sometimes, it's not easy to come up with an attractive color scheme on your own. Here, a tool like a random color generator can help you select the right color combination. Use this tool and have fun playing around with it as you generate adorable random color palette combinations for your projects. The random color generator will help you find a random color you may not have thought of using. It can especially help you in brainstorming your next great design. It's never been easier to generate a random color palette. You just have to enter the color format, type in the number of colors you want, and select the Create button. If you are looking for a different color, follow the same steps again until you come up with a great color palette. The best part is that the random color picker is free and can be used as many times as you want. Random hex code color generator: A hex code color is a color generated from hexadecimal values. A hex code starts with a # followed by six hexadecimal digits, arranged in three pairs representing Red, Green, and Blue. This color code is used often in web design. For example, #FFDD00 produces a yellow color. Random RGB code color generator: RGB stands for Red, Green, Blue. Each value defines the intensity of one channel on a scale from 0 to 255.
When all three elements are off, the RGB color is black, while if all elements are lit up at full brightness the RGB color is white. RGB values are used in HTML, CSS, XHTML and other web technologies. For example, the hex code #1f779f corresponds to the RGB value (31, 119, 159), a shade of blue. Random HSL code color generator: HSL stands for Hue, Saturation, Lightness. Hue is represented as an angle on the color circle with a value between 0 and 360. Saturation is a percentage: 0% is a shade of gray and 100% is the full color. Lightness is also a percentage: 0% is black and 100% is white. For example, HSL(0, 100%, 50%) generates a red color.
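The three formats described above can be sketched in a few lines of Python. This is an illustrative sketch of how such a generator might work, not the code behind any particular tool; the function names are made up for the example:

```python
import random

def random_rgb():
    """Pick a random (R, G, B) triple, each channel 0-255."""
    return tuple(random.randint(0, 255) for _ in range(3))

def rgb_to_hex(rgb):
    """Format an RGB triple as a #RRGGBB hex code (two hex digits per channel)."""
    return "#{:02X}{:02X}{:02X}".format(*rgb)

def random_hsl():
    """Pick a random HSL value: hue 0-360 degrees, saturation and lightness 0-100%."""
    return (random.randint(0, 360), random.randint(0, 100), random.randint(0, 100))

def random_palette(fmt="hex", n=5):
    """Generate n random colors in the requested format: 'hex', 'rgb', or 'hsl'."""
    if fmt == "hex":
        return [rgb_to_hex(random_rgb()) for _ in range(n)]
    if fmt == "rgb":
        return [random_rgb() for _ in range(n)]
    if fmt == "hsl":
        return ["hsl({}, {}%, {}%)".format(*random_hsl()) for _ in range(n)]
    raise ValueError("unknown format: " + fmt)

print(rgb_to_hex((31, 119, 159)))  # → #1F779F
print(random_palette("hex", 3))    # e.g. three random #RRGGBB codes
```

Note how the hex and RGB formats are two views of the same triple: the hex code is just each channel printed as a two-digit hexadecimal number.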
The first Briton sailed to the New World only seven years after Columbus, a long-lost royal letter reveals. Written by Henry VII 510 years ago, it suggests Bristol merchant William Weston headed for America in 1499. In his letter the king instructs his Chancellor to suspend an injunction against Weston because "he will shortly with God's grace, pass and sail for to search and find if he can the new found land". Bristol University's Dr Evan Jones believes it was probably the earliest attempt to find the North-West Passage - the sea route around North America to the Pacific. He said: "Henry's letter is exciting because so little is known about early English voyages of discovery. Nobody has heard of William Weston yet this letter reveals him to be the first Englishman to lead an expedition to America." The letter came to light by chance. It was found in 1981 and passed to historian David Beers Quinn. He failed to publish it, preferring to wait for historian Alwyn Ruddock to release research on explorer John Cabot. In her will Dr Ruddock ordered all her notes destroyed. This spurred Dr Jones to discover what she had found. It was while doing this that he came across the letter. Before it was unearthed the first English expedition was thought to be by Robert Thorne and Hugh Eliott in 1502. Columbus landed in the Bahamas in 1492.
The WHO defines health as a "state of complete physical, mental, and social well-being, and not merely the absence of disease or infirmity". Scientific evidence has shown the major beneficial effects of physical activity on all three aspects of health. Numerous physical, mental, and social health benefits of physical activity are presented in Figure 2. The physical fitness and health status of children and adolescents may be substantially improved by regular participation in physical activity. Compared to their inactive peers, physically active children and adolescents have higher levels of cardiorespiratory fitness, muscular endurance, and strength. The well-documented health benefits include a reduced risk of obesity, more favourable cardiovascular and metabolic disease risk profiles, enhanced bone health, and improved mental health [4]. Figure 2. Key health benefits of physical activity (adapted from Pedisic 2011) - Allender, S., Cowburn, G. & Foster, C. 2006. Understanding participation in sport and physical activity among children and adults: a review of qualitative studies. Health Education Research, 21(6), 826-835. DOI: 10.1093/her/cyl063. - Biddle, S. & Asare, M. 2011. Physical activity and mental health in children and adolescents: a review of reviews. British Journal of Sports Medicine, 45, 886–895. DOI: http://dx.doi.org/10.1136/bjsports-2011-090185. - Bouaziz, W., Lang, P.O., Schmitt, E., Kaltenbach, G., Geny, B. & Vogel, T. 2016. Health benefits of multicomponent training programmes in seniors: a systematic review. International Journal of Clinical Practice, Jul; 70(7), 520–536. DOI: 10.1111/ijcp.12822. - Burt, L.A., Greene, D.A., Ducher, G. & Naughton, G.A. 2013. Skeletal adaptations associated with pre-pubertal gymnastics participation as determined by DXA and pQCT: A systematic review and meta-analysis. Journal of Science and Medicine in Sport 16, 231–239. DOI: http://dx.doi.org/10.1016/j.jsams.2012.07.006.
- Gomez-Bruton, A., Montero-Marín, J., González-Aqüero, A., García-Campayo, J., Moreno, L.A., Casjus, J.A. & Vicente-Rodríguez, G. 2016. The Effect of Swimming During Childhood and Adolescence on Bone Mineral Density: A Systematic Review and Meta-Analysis. Sports Medicine, 46, 365–379. DOI: 10.1007/s40279-015-0427-3. - Eime, R.M., Young, J.A., Harvey, J.T., Charity, M.J. & Payne, W.R. 2013a. A systematic review of the psychological and social benefits of participation in sport for children and adolescents informing development of a conceptual model of health through sport. International Journal of Behavioral Nutrition and Physical Activity, 10(98). http://ijbnpa.biomedcentral.com/articles/10.1186/1479-5868-10-98. - Eime, R.M., Young, J.A., Harvey, J.T., Charity, M.J. & Payne, W.R. 2013b. A systematic review of the psychological and social benefits of participation in sport for adults: informing development of a conceptual model of health through sport. International Journal of Behavioral Nutrition and Physical Activity, 10(135). DOI: 10.1186/1479-5868-10-135. - Janssen, I., & LeBlanc, A.G. 2010. Systematic review of the health benefits of physical activity and fitness in school-aged children and youth. International Journal of Behavioral Nutrition and Physical Activity, May;7(40). DOI: 10.1186/1479-5868-7-40. - Okely, A.D., Salmon, J., Vella, S.A., et al. 2012. A Systematic Review to update the Australian Physical Activity Guidelines for Children and Young People. Report prepared for the Australian Government Department of Health. Canberra: Commonwealth of Australia. - Smith, J.J., Eather, N., Morgan, P.J., Plotnikoff, R.C., Faigenbaum, A.D. & Lubans, D.R. 2014. The Health Benefits of Muscular Fitness for Children and Adolescents: A Systematic Review and Meta-Analysis. Sports Medicine, 44, 1209–1223. DOI: 10.1007/s40279-014-0196-4. - St George, A., Kite, J., Hector, D., Pedisic, Z., Bellew, B. & Bauman, A. 2014.
The Healthy Eating and Active Living Strategy – Additional Health Benefits: an Evidence Check brokered by The Sax Institute (www.saxinstitute.org.au) for the NSW Ministry of Health. - Tsang, T.W.M., Kohn, M., Chow, C.M. & Singh, M.F. 2008. Health benefits of Kung Fu: A systematic review. Journal of Sports Sciences, 26(12), 1249-1267. DOI: 10.1080/02640410802155146. Table 1. Health outcomes
Eating disorders are serious disturbances in eating behavior, such as extreme and unhealthy reduction of food intake or severe overeating. They also occur with feelings of distress or excessive concern about body shape or weight. The main types of eating disorders are anorexia nervosa, bulimia nervosa, and binge eating disorder. Eating disorders often develop during adolescence or early adulthood, but may also start during childhood or later in adulthood. Females are much more likely than males to develop an eating disorder. Eating disorders frequently occur with other psychiatric conditions, such as depression, substance abuse, and anxiety disorders. In addition, people with eating disorders can experience a range of physical health complications. While some of these are minor, others can cause serious heart conditions, kidney failure, and even death. Anorexia nervosa is an eating disorder in which you are obsessed with dieting and exercise, which leads to excessive weight loss. You are generally considered anorexic when you do not maintain your body weight at or above 85% of your expected weight. If you have bulimia nervosa, you feel overly concerned with your weight and body image. Bulimia nervosa is an eating disorder in which you compulsively eat large amounts of food. This is called binging. Then, you use unhealthy means, such as vomiting, laxatives, or water pills, to rid your body of the food. You may also diet or engage in extreme amounts of exercise to use up calories taken in through binging. Binge Eating Disorder If you have binge eating disorder, you eat excessive amounts of food within a short period of time.
Episodes of binge eating are associated with at least three of the following: - Eating more rapidly than normal - Eating until you feel uncomfortably full - Eating large amounts of food although you don't feel hungry - Eating alone due to embarrassment about the amount of food you eat - Feeling disgusted about yourself, depressed, or guilty about your eating behavior During an episode, you feel a lack of control over your eating. On average, binge eating occurs at least two days a week for six months. You do not purge your body of the excess calories; therefore, you may be overweight for your age and height. During and after a binge, you feel self-disgust and shame, which can lead to another binge.
- What are the risk factors for eating disorders?
- What are the symptoms of eating disorders?
- How are eating disorders diagnosed?
- What are the treatments for eating disorders?
- Are there screening tests for eating disorders?
- How can I reduce my risk of eating disorders?
- What questions should I ask my doctor?
- What is it like to live with eating disorders?
- Where can I get more information about eating disorders?
- Reviewer: Michael Woods, MD
- Review Date: 05/2015
- Update Date: 05/20/2015
Life is not merely living but living in health. Good health is the most important requirement and a precious asset of life. To achieve success and happiness and to enjoy life, good health is essential. Physical exercise is the prerequisite to good health. The regular movement of the limbs of the body according to rules is termed physical exercise. Its aim is to keep the body fit and hardy. Physical exercise is good for us in a number of ways. We know that every part of a machine has its individual function, and unless each part is set to work, the whole machine remains inoperative. Similarly, our body is a machine, and the limbs of the body should be allowed to do their individual functions to keep the body active. Physical exercise helps in the proper functioning of the body. It is necessary for the regular circulation of blood, proper digestion of food and the systematic functioning of the nervous system. Many diseases can be prevented or controlled through particular types of exercise. Excess fat, coronary heart disease, diabetes, asthma etc. can be effectively controlled by means of physical exercise. Physical exercise enables a person to live a long life. The average longevity of people who take exercise is relatively higher than that of those who do not. It is said, "A sound mind lives in a sound body." If we are healthy, we can keep jolly all the time and enjoy life to the full. In brief, physical exercise is essential for leading a healthy, happy and successful life on earth. There are many forms of physical exercise. Each one is suitable for people of a particular age. It can be indoor or can be done with instruments.
Running, swimming, horse riding, rowing, cycling, gymnastics and such other exercises are good for the limbs. Those who are not capable of doing these may try walking. Different games like football and cricket, and country games like ha-du-du and dariabandha, are among the top-class exercises. We should always keep in mind that all kinds of physical exercise are not suitable for everyone, because different people have different capacities. Obviously harder exercises like wrestling and gymnastics are suitable for young people who have the energy to perform them. Weaker and older people should take milder exercises like walking, jogging and free-hand exercises. We should keep in mind that over-exercise never does good; rather, it breaks down the health. So we should take such exercises as would suit us. Physical exercise is important for the preservation of health. It also builds our character. The exercise learnt at an early age will help us enjoy better health and make better soldiers of us in our struggle of life. So we should practise it from our early years. Games and sports are sources of enjoyment and exercise too, of amusement and pleasure. Humans are rational beings and they need to develop their faculties to express themselves in various fields. So they require soundness of body and mind, for which games and sports are necessary. Games and sports are not only sources of pleasure and amusement but also means of keeping fit through physical exercise and of establishing a relation between two rival groups by eradicating conflict and strife. There are various kinds of sports like swimming, boating, running and jumping, and games like football, cricket, hockey, tennis, badminton, volleyball, basketball etc. Sports have a great significance in our individual as well as our collective life. They build our body and give us more energy in work. They give us skills, discipline, a sense of co-operation and team spirit. Some games call forth courage and presence of mind.
Sports and games are joyful activities too. A true sportsman is frank, generous and free from petty spite. These are sportsmanlike qualities. There is no doubt that many outdoor games, such as football, cricket and hockey, are good for growing boys. They provide physical exercise, which is necessary for health, in an interesting form. Moreover, such games, by training boys to work together in a team, teach corporate discipline and so promote what is called esprit de corps. Games form a valuable part of school education. They help in the moral training of boys. They teach certain necessary moral lessons, and in a way boys can understand: that the playing of games promotes co-operation, a sense of fair play, obedience to rules, self-control, and sacrifice of self for the good of the whole. The proverb says, "All work and no play makes Jack a dull boy." Sports can refresh and supply fresh vigour to our body and mind. We should remember that a sound mind exists in a sound body.
Just as city slickers have faster-paced lives than country folk, so too do urban birds, compared with their forest-dwelling cousins. The reason, researchers report today, is that urban noise and light have altered the city birds' biological clocks. The finding helps to explain prior reports that urban songbirds adopt more nocturnal lifestyles—data that prompted Davide Dominoni, an ecologist at the Max Planck Institute for Ornithology in Radolfzell, Germany, to investigate whether the birds' activity patterns were merely behavioral responses to busy cities or were caused by an actual shift in the animals' body clocks. For the study, published in Proceedings of the Royal Society B, Dominoni and his colleagues set up an experiment with European blackbirds (Turdus merula). The scientists attached tiny 2.2-gram radio-pulse transmitters to blackbirds living in Munich, Germany, as well as to those living in a nearby forest. The transmitters monitored the birds' activity for three weeks. Dominoni found that whereas forest birds started their activity at dawn, city birds began 29 minutes earlier, on average, and remained active for 6 minutes longer in the evening. Keen to determine whether these differences were due to physiological changes, Dominoni collected blackbirds from both locations and placed them into light- and sound-proof enclosures. For ten days these enclosures were illuminated with a constant, dim light so the birds had no idea what time of day it was, and their activity patterns were monitored. The researchers found that the city birds in the enclosures had faster biological clocks than forest birds. It took the city birds an average of 50 minutes less to go through a full 24-hour cycle of activity than it took forest birds. And without the external stimuli of dawn and dusk, the urban birds' behavioral rhythms weakened rapidly, with their periods of activity and rest becoming more irregular than those of the forest birds.
Having such weakly set biological clocks could be a boon for the blackbirds. "It could make them better at coping with city environments that are not as predictable as the wilderness," says Dominoni. But such clocks could also potentially have adverse health effects. "You have to wonder — if these city birds are not compensating by napping during the day or sleeping more deeply at night, is sleep deprivation reducing their cognitive abilities or shortening their life spans?" says Niels Rattenborg, an avian sleep biologist at the Max Planck Institute for Ornithology in Seewiesen, who was not associated with the study. Still to be determined, Dominoni says, is whether humans who live in cities also have altered circadian rhythms. That is a question he hopes to address in future research. Others wonder whether birds' biological clocks are altered permanently by city life. "I'd be really interested in seeing an experiment where urban birds are transplanted to a rural environment, and vice versa," says Daniel Mennill, an ornithologist at the University of Windsor in Ontario, Canada. "Would the urban birds continue to wake up early? Would country birds change? We just don't know."
"RESULTS: The true incidence of autism spectrum disorders is likely to be within the range of 30-60 cases per 10 000, a huge increase over the original estimate 40 years ago of 4 per 10 000. The increase is largely a consequence of improved ascertainment and a considerable broadening of the diagnostic concept. However, a true risk due to some, as yet to be identified, environmental risk factor cannot be ruled out. There is no support for the hypothesis for a role of either MMR or thimerosal in causation, but the evidence on the latter is more limited. CONCLUSION: Progress in testing environmental risk hypotheses will require the integration of epidemiological and biological studies." I do not have access to the full text - I would be interested in what data was used. This study is from 2005, with the numbers increasing annually from what I understand. Are health care providers improving their diagnostics from year to year? I would be interested to see any subsequent biological studies. "Rates of diagnosis of autism have risen since 1980, raising the question of whether some children who previously had other diagnoses are now being diagnosed with autism. We applied contemporary diagnostic criteria for autism to adults with a history of developmental language disorder, to discover whether diagnostic substitution has taken place. A total of 38 adults (aged 15-31y; 31 males, seven females) who had participated in studies of developmental language disorder during childhood were given the Autism Diagnostic Observation Schedule--Generic. Their parents completed the Autism Diagnostic Interview--Revised, which relies largely on symptoms present at age 4 to 5 years to diagnose autism. Eight individuals met criteria for autism on both instruments, and a further four met criteria for milder forms of autistic spectrum disorder. Most individuals with autism had been identified with pragmatic impairments in childhood."
Some children who would nowadays be diagnosed unambiguously with autistic disorder had been diagnosed with developmental language disorder in the past. This finding has implications for our understanding of the epidemiology of autism. How conclusive can a study of 38 people be? Perhaps indicative of a trend, but far from conclusive. (One of the 'problems' with the Wakefield study is that it was done on 12 children - enough to perhaps indicate a trend, but nothing conclusive.) Again, I don't have access to the full text. This is the newest of the 3 studies posted by Carrie, and is a study done in the UK from what I can understand in the abstract. I wonder if a similar study done in the USA would draw the same conclusions? |METHODS: Literature review and interpretation. I would like to see what literature was reviewed, but don't have access to the full text. |CONCLUSIONS: There has (probably) been no real increase in the incidence of autism. There is no scientific evidence that the measles, mumps and rubella (MMR) vaccine or the mercury preservative used in some vaccines plays any part in the aetiology or triggering of autism, even in a subgroup of children with the condition. From the study that reviewed literature. I am still trying to get my head around this. I find it quite weird that these studies seek to establish no true increase in the incidence of autism while simultaneously seeking to establish no link between autism and vaccines, specifically thimerosal and MMR. And those who have read the studies proving thimerosal to be just fine know how flawed they are. The information available really is a mess. I am not yet sure for myself that there is no true increase in the incidence of autism. If there were no true increase what would that mean? Who stands to gain by a study that finds no increase in incidence? I also did not see who funded the studies. That might help answer some of my questions. 
It does seem to be fairly well established that the environment for the fetus and newborn is critical. It just looks dishonest not to try and figure out whether vaccines are an environmental trigger for some children, including vaccinating pregnant women. And a perception of dishonesty will undermine trust in the scientists who keep concluding there is no link, with highly flawed studies. Some more dreaming, but I think the quickest and most cost-effective way to improve parent trust in vaccination is to compare unvaccinated children with vaccinated and see if anything significant emerges. Speak to parents who are concerned and ask them what answers they are looking for - design the study in conjunction with them and take it from there. Trying to intimidate and mandate is only going to work to the disadvantage of those who view vaccination as essential to public health. It shouts "I do not have any real facts, but I am going to bully you anyway"
August is the eighth month of the year in the Julian and Gregorian Calendars. It is a summer month in the Northern Hemisphere, and a winter month in the Southern Hemisphere, where it is the seasonal equivalent of February in the Northern Hemisphere. Hoyt's New Cyclopedia Of Practical Quotations - Quotes reported in Hoyt's New Cyclopedia Of Practical Quotations (1922), p. 46. - The August cloud * * * suddenly Melts into streams of rain. - William Cullen Bryant, Sella. - In the parching August wind, Cornfields bow the head, Sheltered in round valley depths, On low hills outspread. - Christina G. Rossetti, A Year's Windfalls, Stanza 8. - Dead is the air, and still! the leaves of the locust and walnut Lazily hang from the boughs, inlaying their intricate outlines Rather on space than the sky -on a tideless expansion of slumber. - Bayard Taylor, Home Pastorals, August.
perhaps help to suggest the point of view which I am trying to indicate, to say that in the cases we have been considering the proposition occurs as a fact, not as a proposition. Such a statement, however, must not be taken too literally. The real point is that in believing, desiring, etc., what is logically fundamental is the relation of a proposition considered as a fact, to the fact which makes it true or false, and that this relation of two facts is reducible to a relation of their constituents. Thus the proposition does not occur at all in the same sense in which it occurs in a truth-function. There are some respects in which, as it seems to me, Mr Wittgenstein's theory stands in need of greater technical development. This applies in particular to his theory of number (6.02 ff.) which, as it stands, is only capable of dealing with finite numbers. No logic can be considered adequate until it has been shown to be capable of dealing with transfinite numbers. I do not think there is anything in Mr Wittgenstein's system to make it impossible for him to fill this lacuna. More interesting than such questions of comparative detail is Mr Wittgenstein's attitude towards the mystical. His attitude upon this grows naturally out of his doctrine in pure logic, according to which the logical proposition is a picture (true or false) of the fact, and has in common with the fact a certain structure. It is this common structure which makes it capable of being a picture of the fact, but the structure cannot itself be put into words, since it is a structure of words, as well as of the facts to which they refer. Everything, therefore, which is involved in the very idea of the expressiveness of language must remain incapable of being expressed in language, and is, therefore, inexpressible in a perfectly precise sense. This inexpressible contains, according to Mr Wittgenstein, the whole of logic and philosophy. 
The right method of teaching philosophy, he says, would be to confine oneself to propositions of the sciences, stated with all possible clearness and exactness, leaving philosophical assertions
A year ago, Dan Wilson knew "absolutely nothing" about silica sand mining. Today, Wilson, a 23-year-old from Winona, is among those who seek to prevent the controversial mining method near Winona. "This is our sand. It is our responsibility of what happens to it," Wilson said. Geologists have long known about the strong, round sand buried beneath the bluffs near the Mississippi River. For decades, companies have mined it for window glass and water filtration products. But the practice has drawn widespread condemnation among environmentalists and others since energy companies discovered it helps extract oil and natural gas from the ground. The sand is used in a natural gas extraction process called hydraulic fracturing, or fracking. When a mixture of fluid and sand is forced into underground rock formations, it breaks up the stone and the sand props the fractures open, releasing large amounts of natural gas. The hard Minnesota sand is perfect for fracking because it can withstand the intense pressure needed to break rock. Silica sand mining is a divisive topic in southeastern Minnesota. Local officials have held town hall meetings with residents, met with environmentalists and industry leaders, and passed moratoriums on mining so they can study the practice that has already swept parts of Wisconsin. Nine local moratoriums in Minnesota have given rise to a public movement. "We heard about frac sand coming to Winona about six months ago," Wilson said. "[We] got a sense for what the sand is being used for, and then ... attended a lot of meetings, wrote a lot of letters, talked to council members." Area residents also started to hold rallies against silica sand mining, an industry that has roughly doubled in size since 2008. Wilson helped organize a recent rally outside Winona City Hall, where a couple dozen people formed a long line on the sidewalk and held colorful signs that read "How did this happen?" and "Don't fracture Winona." 
"If we believe that [the sand] is being used for something bad and immoral, then we have a right to talk about that; we have a right to do something about it," Wilson said. Winona County has the shortest of all the moratoriums that have passed in southeastern Minnesota. It expires May 1. County officials expect at least eight permit applications for new mines by mid-May. The proposed sites are scattered throughout the county and represent less than 1 percent of its land. Long-time residents like Marianna Byman say that small figure doesn't matter. "If a town is going to change its character, there needs to be some warning," Byman said. "And people need to have input and not be caught unaware by this." Byman, a history professor at the University of Winona, said a silica sand processing facility near the river is already changing the feel of downtown. "The town is already filling up with dust and sand and trucks," she said. "This isn't the Winona that we fell in love with and I think everybody just spontaneously has just become very alarmed in a very short period of time." Winona County Planning and Environmental Services Director Jason Gilman has heard all sorts of perspectives from local residents in the last six months. "We're seeing this enormous range of comment," he said. But Gilman knows county officials need to decide how best to regulate the new mining projects that may come here. County officials voted last year to make mining companies pay for road damages. The county also will require mining companies to submit environmental, geological, road and traffic impact studies for proposed mines. The companies also will have to submit a plan to reclaim the mines after mining operations are complete, Gilman said. A big question for Winona County and other communities where silica sand is present is how far the mining operations will go. "We get asked that question all the time. 'What's the end game?' or 'How big is this going to get?' " Gilman said. 
"Of course, we don't really know that but ... From what I've been hearing and the data I've been looking at, this is going to get pretty big." The moratorium may have put a temporary block on new sand mining in Winona County. But the sand at a processing plant in downtown Winona is coming from across the river in Wisconsin. Near the corner of Harriet and Second streets in downtown Winona, trucks haul load after load of silica sand from a giant stockpile onto rail carts. Officials with Modern Transport, the company that runs the transportation facility, declined to comment. But nearby businesses like Chrysler Winona have noticed more activity in the area. "People have come by and said they can't tell the color of the vehicles," said Andy Puetz, general manager at the dealership. "If people can't tell even the color of it, you can really tell there's been sediment in the air and some fallout from operations here in the area." Puetz said the local economy could benefit from new mining operations. But he's had to hire additional workers and is spending $2,000 a month just to keep the 130 cars on his lot clean. Winona city officials are considering a separate moratorium vote on March 19, which could halt mining expansion within the city limits for at least a year.
Find an exponential function that passes through each pair of points. To start, write a system of equations in the form y = ab^x by substituting the points in for x and y. Substitute the second equation into the first to eliminate a. Substitute the value you found into one of the original equations and solve for the remaining unknown. Then substitute a and b back into the original equation y = ab^x.
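The substitution-and-elimination procedure above can be sketched in a few lines of Python. This is an illustrative helper (the function name and example points are my own), and the sketch assumes both y-values are positive and the x-values are distinct:

```python
def exponential_through(p1, p2):
    """Fit y = a * b**x through two points.

    Substituting each point gives y1 = a*b**x1 and y2 = a*b**x2.
    Dividing the second equation by the first eliminates a:
        y2/y1 = b**(x2 - x1)  =>  b = (y2/y1)**(1/(x2 - x1))
    Back-substituting into the first equation gives a = y1 / b**x1.
    Assumes positive y-values and distinct x-values.
    """
    (x1, y1), (x2, y2) = p1, p2
    b = (y2 / y1) ** (1 / (x2 - x1))
    a = y1 / b ** x1
    return a, b

# Example: the curve through (1, 6) and (2, 18) is y = 2 * 3**x
a, b = exponential_through((1, 6), (2, 18))  # a = 2.0, b = 3.0
```

The division step is exactly the textbook elimination: because both equations share the factor a, their ratio isolates b, after which a follows by back-substitution.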
Human inner ear organoid with sensory hair cells (cyan) and sensory neurons (yellow). An antibody for the protein CTBP2 reveals cell nuclei as well as synapses between hair cells and neurons (magenta). Credit: Image courtesy of Karl Koehler Researchers at Indiana University School of Medicine have successfully developed a method to grow inner ear tissue from human stem cells — a finding that could lead to new platforms to model disease and new therapies for the treatment of hearing and balance disorders. “The inner ear is only one of few organs with which biopsy is not performed and because of this, human inner ear tissues are scarce for research purposes,” said Eri Hashino, PhD, Ruth C. Holton Professor of Otolaryngology at IU School of Medicine. “Dish-grown human inner ear tissues offer unprecedented opportunities to develop and test new therapies for various inner ear disorders.” The study, published online May 1 in Nature Biotechnology, was led by Karl R. Koehler, PhD, assistant professor in the Department of Otolaryngology and Head and Neck Surgery at IU School of Medicine, and Dr. Hashino in collaboration with Jeffrey Holt, PhD, professor of otology and laryngology at Harvard Medical School and Boston Children’s Hospital. The research builds on the team’s previous work with a technique called three-dimensional culture, which involves incubating stem cells in a floating ball-shaped aggregate, unlike traditional cell culture in which cells grow in a flat layer on the surface of a culture dish. This allows for more complex interactions between cells, and creates an environment that is closer to what occurs in the body during development, Dr. Koehler said. By culturing human stem cells in this manner and treating them with specific signaling molecules, the investigators were able to guide cells through key processes involved in the development of the human inner ear. 
This resulted in what the scientists have termed inner ear “organoids,” or three-dimensional structures containing sensory cells and supporting cells found in the inner ear. “This is essentially a recipe for how to make human inner ears from stem cells,” said Dr. Koehler, lead author of the study, whose research lab works on modeling human development. “After tweaking our recipe for about a year, we were shocked to discover that we could make multiple inner ear organoids in each pea-sized cell aggregate.” The researchers used CRISPR gene editing technology to engineer stem cells that produced fluorescently labeled inner ear sensory cells. Targeting the labeled cells for analysis, they revealed that their organoids contained a population of sensory cells that have the same functional signature as cells that detect gravity and motion in the human inner ear. “We also found neurons, like those that transmit signals from the ear to the brain, forming connections with sensory cells,” Dr. Koehler said. “This is an exciting feature of these organoids because both cell types are critical for proper hearing and balance.” Dr. Hashino said these findings are “a real game changer, because up until now, potential drugs or therapies have been tested on animal cells, which often behave differently from human cells.” The researchers are currently using the human inner ear organoids to study how genes known to cause deafness interrupt normal development of the inner ear and plan to start the first-ever drug screening using human inner ear organoids. “We hope to discover new drugs capable of helping regenerate the sound-sending hair cells in the inner ear of those who have severe hearing problems,” Dr. Hashino said. Karl R Koehler, Jing Nie, Emma Longworth-Mills, Xiao-Ping Liu, Jiyoon Lee, Jeffrey R Holt, Eri Hashino. Generation of inner ear organoids containing functional hair cells from human pluripotent stem cells. Nature Biotechnology, 2017; DOI: 10.1038/nbt.3840
Regular heartburn is the main symptom of GERD. Heartburn is a feeling of burning behind the breastbone. It can occur at any time, but is often aggravated by overeating or lying down after a big meal. Many also have regurgitation, a feeling of food and fluid moving back up the throat or into the mouth. The fluids from the stomach can cause: - Sour or bitter taste in the back of mouth or throat - Feeling of a lump in the throat - Bad breath The regular reflux of stomach acid can cause irritation of the tissue and other structures of the throat. This irritation can lead to other symptoms, such as: - Sore throat - Chronic laryngitis - Chronic cough - Wheezing or trouble breathing - Excessive clearing of throat Infants with GERD may also have recurrent vomiting. This can affect their ability to get proper nutrition and slow growth and development. Long-term complications of GERD may include: - Inflammation of the esophagus—esophagitis - Bleeding and ulcers in the esophagus - Narrowing of the esophagus—esophageal stricture - Dental problems, which may occur because of the effect of stomach acid on tooth enamel - Asthma attacks - During sleep, acid refluxes from the stomach into the throat, then drains into the lungs—aspiration pneumonia - A precancerous condition that can lead to esophageal cancer—Barrett’s esophagus - Esophageal cancer The muscles of the esophagus can tighten or spasm. This can cause pain that radiates through the chest and back, similar to how a heart attack may feel. Do not assume that chest pain is an esophageal spasm. If you have chest pains or other symptoms of a possible heart attack, call for emergency medical services right away. - Squeezing or chest pressure - Pain in the left shoulder, left arm, or jaw - Trouble breathing - Sweating, clammy skin - Pain that starts during activity or stress - Feeling of impending doom - Reviewer: Daus Mahnke, MD - Review Date: 05/2015 - - Update Date: 05/20/2015 -
Literacy in Every Classroom In January, we return to our classrooms with great aspirations for our students and for ourselves. The opportunity to catch up on my own reading recharged my batteries and left me inspired by the rich classroom examples of teachers using literacy as the bridge to meaningful collaboration. Here are a few that caused me to hit the share button: There is a contagious energy that jumps from these stories of partnership grounded in literacy. It is clear that the teachers have been impacted by their work together as well as by the students. I find myself wondering about the processes leading to outcomes such as these. It's refreshing when the curtain is pulled back to reveal the realities of the process. - A group of literacy coaches invites us into their learning and reminds us of the role of professional readings and the conversations worth having when we get together (free login required). So that others may find inspiration and energy, we invite you to share your story of collaboration, whether it be the artifacts that demonstrate a shared commitment to literacy or the lessons learned from working together. -- Sharon Roth, Senior Developer Professional Learning Opportunities National Council of Teachers of English and the National Center for Literacy Education Sign up now for an RSS feed of each week's INBOX Ideas!
Optimal performance with minimum environmental impact According to experts, lighting accounts for approximately 20 percent of this country's energy usage. And in a typical building, it can add up to nearly half. Sustainable lighting practices try to level the playing field - delivering optimal lighting performance with the least impact on the physical environment. Most people equate this with energy efficiency. But it involves much more. Today's smaller bulbs and fixtures require less material to manufacture and result in less waste when they need to be replaced. Loeb Electric experts can design a sustainable lighting program that ensures you are: - Employing the most energy efficient light source for each requirement - Reducing overlighting - especially in outdoor areas - Improving environmental and regulatory compliance
Eye care is not thought about until there is an issue. Even if you’ve noticed your vision deteriorating, it may not be too late to take action. The information given here is designed specifically for those who do not want to wait until it is too late. Read on to learn how to care for your eyes efficiently. To help maintain good eye health, it is important that you regularly see a professional who is properly trained to treat this area. Ask your family or friends to recommend a good eye doctor. If you do so, you will know that your eye care is in good hands. When you go out on a sunny day, be sure that you wear a pair of sunglasses that offer UV protection. The rays from the sun can be damaging to your eyes if they are exposed to the sun too long. Make sure that the lenses are from a reputable manufacturer. Get your eyes checked every year. Your eyecare professional can examine your eyes to make sure that there are no underlying problems that are developing. Even if you have good vision, it is important to get your eyes examined once a year. Doing this regularly will ensure that you will have healthy eyes as you get older. Consume oily fish several times each week. These are high in omega-3 fatty acids. These acids are incredibly beneficial to eye health in addition to other parts of your body. Vary your selection from wild salmon, tuna and mackerel. The more you eat, the healthier your vision will be from it. If you work in an environment where particles or objects may become airborne, wear safety goggles. Though many construction sites require them, other professions may not. Look around at your work environment. Consider how the various objects may encounter your eyes. If you perceive potential danger, purchase a pair of safety glasses. As you probably already know, smoking is bad for your overall health. What you may not have known is that it is actually bad for your eye health, too. 
It can lead to a number of eye conditions, such as optic nerve damage, cataracts, and macular degeneration. Do your best to quit smoking to avoid these conditions. If you wear contact lenses, avoid wearing them while you sleep or for more than 19 hours. Unless you are wearing special lenses that are made for wearing overnight, your contact can deprive your eyes of oxygen and lead to extreme discomfort and possibly serious permanent damage to your sight. Assist your eyes through the use of good sunglasses. Good sunglasses block UV rays that can damage the eyes. Select sunglasses that block 100 percent of UVB and UVA rays. If you are someone who drives a lot, think about polarized lenses. These glasses can greatly help to reduce glare. Even if your contacts offer UV protection, wearing sunglasses is still important. Avoid looking at your computer screen for too long. Take a break every half hour to give your eyes a rest from the strain. Staring at your computer can cause dry eye because you do not blink as often, so make an effort to blink every 30 seconds while you are at your computer. Choose a thick, dense eye cream to ensure the skin around your eyes stays taut and firm. Make sure the product you choose includes essential fatty acids as they are a necessity for your most delicate skin. If you are a teen, the time is now to start, but even adults can benefit from starting later. Keeping up with your routine eye exams is critical to maintaining eye health. If you’re older, check your eyes more frequently. Older people are more likely to develop glaucoma or cataracts. Regular examinations provide your eye care professional the opportunity to detect problems early on. To diminish puffy eyes, use slices of raw potato. Cut the potato into half circles and place over your closed eyes. If you prefer, you can grate the potato and place in some Muslin cloth, then squeeze excess liquid out and place on closed eyes. Leave either on for 15-20 minutes for best effect. 
It is very possible to have an eye condition and not even know it; some conditions do not even produce any symptoms. This is why it is important to see an eye doctor each year, something most people neglect to do. An eye doctor can take a thorough look at your eyes and investigate any problems he or she may find. When you are outdoors, wear sunglasses. Sunglasses can protect your eyes by blocking harmful rays from the sun. These rays, called ultraviolet rays, can contribute to cataracts as well as macular degeneration. Blocking the rays with sunglasses allows you to protect your eyes while also allowing you to look fashionable. Practice good makeup hygiene. Makeup worn on and around the eyes, particularly mascara, can be a breeding ground for bacteria. If you want to ensure that your eyes stay clear and free from infection, take a few precautions. Mascara should be tossed after three months. In addition, avoid putting liner inside of the eyelash. This can block the oil glands necessary to keep your eyes protected. Make sure your living and working spaces have enough light. You may not think very much about whether your working and living spaces have enough light, but the truth is that it is important. If your environment is too dim, your eyes may start aching, or your head can hurt. Pay conscious attention to how well-lit a room is, so you can add more light if necessary. Millions of people are concerned about their vision, as you most likely are. When people have eye or vision issues, they take the time to understand them. This article has great information on good eye care. Use them so that you can get the good benefits for your eyes.
Beautifully engraved Blue Certificate from the famous International Business Machines Corporation (IBM) issued no later than 1985. This historic document has an ornate block border with a vignette of Mercury flying over the globe. This item has the printed signature of the company's officers and is over 25 years old. IBM was incorporated in the state of New York on June 15, 1911 as the Computing-Tabulating-Recording Company. But its origins can be traced back to 1890, during the height of the Industrial Revolution, when the United States was experiencing waves of immigration. The U.S. Census Bureau knew its traditional methods of counting would not be adequate for measuring the population, so it sponsored a contest to find a more efficient means of tabulating census data. The winner was Herman Hollerith, a German immigrant and Census Bureau statistician, whose Punch Card Tabulating Machine used an electric current to sense holes in punch cards and keep a running total of data. Capitalizing on his success, Hollerith formed the Tabulating Machine Co. in 1896. In 1911, Charles R. Flint, a noted trust organizer, engineered the merger of Hollerith's company with two others, Computing Scale Co. of America and International Time Recording Co. The combined Computing-Tabulating-Recording Co., or C-T-R, manufactured and sold machinery ranging from commercial scales and industrial time recorders to meat and cheese slicers and, of course, tabulators and punch cards. Based in New York City, the company had 1,300 employees and offices and plants in Endicott and Binghamton, N.Y.; Dayton, Ohio; Detroit, Mich.; Washington, D.C., and Toronto, Canada. When the diversified businesses of C-T-R proved difficult to manage, Flint turned for help to the former No. 2 executive at the National Cash Register Co., Thomas J. Watson. In 1914, Watson, age 40, joined the company as general manager. 
The son of Scottish immigrants, Watson had been a top salesman at NCR, but left after clashing with its autocratic leader, John Henry Patterson. However, Watson did adopt some of Patterson's more effective business tactics: generous sales incentives, an insistence on well-groomed, dark-suited salesmen and an evangelical fervor for instilling company pride and loyalty in every worker. Watson boosted company spirit with employee sports teams, family outings and a company band. He preached a positive outlook, and his favorite slogan, "THINK," became a mantra for C-T-R's employees. Watson also stressed the importance of the customer, a lasting IBM tenet. He understood that the success of the client translated into the success of his company, a belief that, years later, manifested itself in the popular adage, "Nobody was ever fired for buying from IBM." Within 11 months of joining C-T-R, Watson became its president. The company focused on providing large-scale, custom-built tabulating solutions for businesses, leaving the market for small office products to others. During Watson's first four years, revenues doubled to $2 million. He also expanded the company's operations to Europe, South America, Asia and Australia. In 1924, to reflect C-T-R's growing worldwide presence, its name was changed to International Business Machines Corp., or IBM. During the Great Depression of the 1930s, IBM managed to grow while the rest of the U.S. economy floundered. Watson took care of his employees. IBM was among the first corporations to provide group life insurance (1934), survivor benefits (1935) and paid vacations (1936). While most businesses had shut down, Watson kept his workers busy producing new machines even while demand was slack. Thanks to the resulting large inventory of equipment, IBM was ready when the Social Security Act of 1935 brought the company a landmark government contract to maintain employment records for 26 million people. 
It was called "the biggest accounting operation of all time," and it went so well that orders from other U.S. government departments quickly followed. The Social Security deal was secured even while IBM was at odds with another branch of the federal government. The Justice Department filed an antitrust case against IBM and Remington-Rand in 1932, alleging that the two companies, which controlled virtually the entire market for punch card machines, were illegally requiring customers to buy their punch cards. The case went to the Supreme Court, which ruled in favor of the Justice Department in 1936. In subsequent years, IBM's size and success would inspire numerous antitrust actions. A 1952 suit by the Justice Department, settled four years later, forced IBM to sell its tabulating machines -- at the time, IBM offered them only through leases -- in order to establish a competing, used-machine market. Another federal antitrust suit dragged on for thirteen years until the Justice Department concluded it was "without merit" and dropped it in 1982. IBM's competitors filed 20 antitrust actions during the 1970s. None succeeded. When World War II began, all IBM facilities were placed at the disposal of the U.S. government. IBM's product line expanded to include bombsights, rifles and engine parts -- in all, more than three dozen major ordnance items. Watson set a nominal one-percent profit on those products and used the money to establish a fund for widows and orphans of IBM war casualties. The war years also marked IBM's first steps toward computing. The Automatic Sequence Controlled Calculator, also called the Mark I, was completed in 1944 after six years of development with Harvard University. It was the first machine that could execute long computations automatically. 
Over 50 feet long, 8 feet high, and weighing almost 5 tons, the Mark I took less than a second to solve an addition problem, but about six seconds for multiplication and twice as long for division -- far slower than any pocket calculator today. In 1952, the company introduced the IBM 701, its first large computer based on the vacuum tube. The tubes were quicker, smaller, and more easily replaceable than the electromechanical switches in the Mark I. The 701 executed 17,000 instructions per second and was used primarily for government and research work. But vacuum tubes rapidly moved computers into business applications such as billing, payroll and inventory control. By 1959, transistors were replacing vacuum tubes. The IBM 7090, one of the first fully transistorized mainframes, could perform 229,000 calculations per second. The Air Force used the 7090 to run its Ballistic Missile Early Warning System. In 1964, American Airlines' SABRE reservations system used two 7090 mainframes to link sales desks in 65 cities. IBM led data processing in a new direction with the 1957 delivery of the IBM 305 Random Access Method of Accounting and Control (RAMAC), the first computer disk storage system. Such machines became the industry's basic storage medium for transaction processing. In less than a second, the RAMAC's "random access" arm could retrieve data stored on any of 50 spinning disks. At an IBM exhibit at the 1958 World's Fair in Brussels, the RAMAC answered world history questions in ten languages. Also in 1957, IBM introduced FORTRAN (FORmula TRANslation), a computer language based on algebra, grammar and syntax rules. It became the most widely used computer language for technical work. A new generation of IBM leadership oversaw this period of rapid technology change. After nearly four decades as IBM's chief executive, Thomas Watson passed the title of president on to his son, Thomas Watson Jr., in 1952. (Another family member, Tom Jr.'s younger brother Arthur K. 
Watson, built the World Trade Corporation -- IBM's foreign operations -- into such a dominating force that it had installed 90 percent of the computers in Europe by the 1960s.) Born the year his father was hired by C-T-R in 1914, Tom Watson Jr. had been heir apparent since joining IBM in 1937 as a salesman. After a five-year interruption, during which he served as a pilot in the U.S. Army Air Corps, Watson Jr. rejoined the company in 1946, and was named a vice president six months later. He became chief executive officer just six weeks before his father's death on June 19, 1956 at age 82. Just as his father saw the company's future in tabulators rather than scales and meat slicers, Tom Watson Jr. foresaw the role computers would play in business, and he pushed IBM to meet the challenge. He led the company's transformation from a medium-sized maker of tabulating equipment and typewriters to an industrial giant. During his stewardship, revenue grew from $900 million to $8 billion, and the number of employees rose from 72,500 to 270,000. On April 7, 1964, IBM introduced the System/360, the first large "family" of computers to use interchangeable software and peripheral equipment. Rather than purchase a new system when the need and budget grew, customers now could simply upgrade parts of their hardware. It was a bold departure from the monolithic, one-size-fits-all mainframe. Fortune magazine dubbed it "IBM's $5 billion gamble." System/360 offered a choice of five processors and 19 combinations of power, speed and memory. A user could operate the same magnetic tape and disk products as another user with a processor 100 times more powerful. System/360 also offered dramatic performance gains, thanks to Solid Logic Technology (SLT) -- half-inch ceramic modules containing circuitry far denser, faster and more reliable than earlier transistors. Under Tom Watson Jr., there also were innovations in marketing. In 1969, IBM changed the way it sold technology. 
Rather than offer hardware, services and software exclusively in packages, marketers "unbundled" the components and offered them for sale individually. Unbundling gave birth to the multibillion-dollar software and services industries. Today, IBM is the world leader in both industries. The 1970s saw the end of more than a half-century of Watson family leadership. Tom Watson Jr. stepped down as CEO in 1971. After an interim period of leadership by T. Vincent Learson, Frank T. Cary took over the company in 1973. Watson served as U.S. ambassador to the Soviet Union from 1979 to 1981 and remained a member of IBM's board of directors until 1984. He died in 1993 at the age of 79. During Cary's tenure, the computer industry expanded and wove its way into everyday life. The floppy disk, introduced in 1971, became the standard for storing personal computer data. When people shopped for groceries, IBM's supermarket checkout station, introduced in 1973, used glass prisms, lenses and a laser to read product prices. Also in 1973, bank customers began making withdrawals, transfers and other account inquiries via the IBM 3614 Consumer Transaction Facility, an early form of today's Automatic Teller Machines. John R. Opel's appointment as CEO in 1981 coincided with the beginning of a new era of computing. Thanks to the birth of the IBM Personal Computer, or PC, the IBM brand began to enter homes, small businesses and schools. Though not a spectacular machine by technological standards, the IBM PC brought together all of the most desirable features of a computer into one small machine. It offered 16 kilobytes of memory (expandable to 256 kilobytes), one or two floppy disk drives and an optional color monitor. When designing the PC, IBM for the first time contracted the production of its components to outside companies. The processor chip came from Intel, and the operating system, called DOS (Disk Operating System), came from a 32-person company called Microsoft. John F. 
Akers became CEO in 1985 and focused on streamlining operations and redeploying resources. IBM's typewriter, keyboard, and printer business -- the division that created the popular "Selectric" typewriter with its floating "golf ball" type element in the 1960s -- was sold to the investment firm of Clayton, Dubilier & Rice Inc. and became an independent company, Lexmark Inc. During Akers' tenure, IBM's significant investment in research produced four Nobel Prize winners in physics, achieved breakthroughs in mathematics, memory storage and telecommunications, and made great strides in expanding computing capabilities. The IBM token-ring local area network, introduced in 1985, permitted personal computer users to exchange information and share printers and files within a building or complex. With the further development of the computer, IBM laid a foundation for network computing and numerous other applications. Despite these advances, this was a period when IBM struggled. During the 1980s and early 1990s, IBM was thrown into turmoil by back-to-back revolutions. The PC revolution placed computers directly in the hands of millions of people. And then, the client/server revolution sought to link all of those PCs (the "clients") with larger computers that labored in the background (the "servers" that served data and applications to client machines). Both revolutions transformed the way customers viewed, used and bought technology. And both fundamentally rocked IBM. Businesses' purchasing decisions were put in the hands of individuals and departments -- not the places where IBM had long-standing customer relationships. Piece-part technologies took precedence over integrated solutions. The focus was on the desktop and personal productivity, not on business applications across the enterprise. By 1993, the company's annual net losses reached a record $8 billion. Cost management and streamlining became a chief concern. 
And IBM considered splitting its divisions into separate independent businesses. Louis V. Gerstner Jr. arrived as IBM's chairman and CEO on April 1, 1993. For the first time in the company's history, IBM had found a leader from outside its ranks. Gerstner had been chairman and CEO of RJR Nabisco for four years, and had previously spent 11 years as a top executive at American Express. Gerstner brought with him a customer-oriented sensibility and the strategic-thinking expertise he had honed through years as a management consultant at McKinsey & Co. Soon after he arrived, he had to take dramatic action to stabilize the company. These steps included rebuilding IBM's product line, continuing to shrink the workforce and making significant cost reductions. Despite mounting pressure to split IBM into separate, independent companies, Gerstner decided to keep the company together. He recognized that one of IBM's enduring strengths was its ability to provide integrated solutions for customers -- a single partner able to deliver more than piece parts or components. Splitting the company would have destroyed that unique IBM advantage. With the rise of the Internet and network computing, the company experienced another dramatic shift in the industry. But this time IBM was better prepared. All the hard work IBM had done to catch up in the client/server field served the company well in the network computing era. Once again, customers were focused on integrated business solutions -- a key IBM strength that combined the company's expertise in solutions, services, products and technologies. In the fall of 1995, delivering the keynote address at the COMDEX computer industry trade show in Las Vegas, Gerstner articulated IBM's new vision -- that network computing would drive the next phase of industry growth and would be the company's overarching strategy. That year, IBM acquired Lotus Development Corp., and the next year acquired Tivoli Systems Inc.
Services became the fastest growing segment of the company, with growth at more than 20 percent per year. From 1993 to 1996, the market value of the company increased by more than $50 billion. In May 1997, IBM dramatically demonstrated computing's potential with Deep Blue, a 32-node IBM RS/6000 SP computer programmed to play chess on a world class level. In a six-game match in New York, Deep Blue defeated World Chess Champion Garry Kasparov. It was the first time a computer had beaten a top-ranked chess player in tournament play, and it ignited a public debate on how close computers could come to approximating human intelligence. The scientists behind Deep Blue, however, preferred to stress more practical concerns. Deep Blue's calculating power -- it could assess 200 million chess moves per second -- had a wide range of applications in fields calling for the systematic exploration of a vast number of variables, among them forecasting weather, modeling financial data and developing new drug therapies.
An activity theory analysis of teaching goals versus student epistemological positions

Journal contribution posted on 20 March 2013 by Barbara Jaworski, Carol Robinson, Janette Matthews and Tony Croft (Mathematics Education Centre)

A teaching innovation for first-year engineering students was designed to involve inquiry-based questions, an electronic graphical medium, small group activity and modifications to assessment. The use of an inquiry approach was intended to encourage students' deeper engagement with mathematics and more conceptual understanding. Data were collected from observations of teaching, ongoing teacher reflections, student surveys, interviews and assessment outcomes. Despite evidence of success in assessments, analyses revealed fundamental differences between students' perceptions of the teaching they experienced and the goals of the teaching team. Activity theory was used to juxtapose contradictory perceptions and highlight issues in the wider sociocultural and institutional settings of the research.
Using GIS: When a Map is Worth a Thousand Words

John Snow had the right idea more than 150 years ago as he investigated the cause of a cholera epidemic in London. Snow, a physician, sketched a map of his Soho neighborhood streets, water pumps and cholera deaths. When he finished, he saw a large clump of dots – each representing one person killed by cholera – near the Broad Street public water pump. Snow's celebrated map helped confirm that contaminated water in that pump had killed many Londoners who drank from it. Today, some enterprising journalists are following in Snow's footsteps with a modern twist: They're using geographic information system (GIS) software to map data for stories and graphics about toxic health threats, prescription medicine abuse and EMS response times.
• The Dallas Morning News used GIS for its June 2008 "Toxic Neighbors" series, which showed how dozens of hazardous chemical sites threatened the health of tens of thousands of Dallas County residents. The Morning News mapped some 900 chemical sites from local, state and federal agencies, and found the ones that were closest to schools and apartment buildings. The newspaper created an online Google Map, which showed the locations of the 52 plants that pose the most potential danger.
• The Lexington Herald-Leader used GIS in 2003 for its look at narcotics abuse in eastern Kentucky. Reporters working on the stories had heard that areas with high rates of legal narcotics prescriptions often have high rates of narcotics abuse. One of the journalists obtained data from the U.S. Drug Enforcement Administration documenting the number of prescriptions by county. She then combined that with U.S. Census population data and calculated legal prescription drug rates by county. Then she put it in a map, which made it clear that four of the top seven counties for narcotics prescriptions were in eastern Kentucky.
• The Arizona Daily Star also used GIS in 2003 to map emergency medical service call data.
The newspaper found that more than half of the time, the city's own ambulances failed to reach the scene in less than eight minutes. Also, the mapping showed that the private ambulances that served the less developed parts of the city sometimes took up to 15 minutes to arrive.

Andrew aftermath coverage boosted GIS use

GIS mapping in journalism started in the early 1990s, with just a handful of journalists using the programs to help readers visualize demographic data from the 1990 Census. Then, in 1992, the Miami Herald used GIS to report on the aftermath of Hurricane Andrew. GIS helped the Herald show that shoddy construction and lax inspections, not wind speed itself, were to blame for much of the damage. The Herald won the Pulitzer Prize for Public Service the next year for its hurricane aftermath stories and inspired journalists to use GIS more widely. Since then, journalists have used GIS to report on nearly every topic: local crime, property values, campaign contributions and many others. In health reporting, journalists have used the programs to show how areas with an abundance of bars also have high rates of drunken-driving accidents and to identify inner-city neighborhoods with alarming rates of lead poisoning of children. But compared with other beats, GIS has been used little in health reporting. So there are plenty of opportunities for enterprising journalists.

How do you start?

• First, you should be comfortable working with health data that are structured in columns and rows, because that's what GIS programs use. If you've used Microsoft Excel spreadsheets or the Access database manager, you're off to a good start.
• Second, you'll need the GIS software and an Intel-based computer. Many journalists use ESRI ArcView. Others use MapInfo Professional or Maptitude. Maptitude is the least expensive commercial GIS.
Journalists can also use the My Maps feature of Google Maps to create basic point maps that display locations over streets or satellite images. There are some free open source programs, but they can be difficult to use, poorly documented and not at all robust.
• Third, you'll need geographic data to display maps. Most GIS software vendors provide some geographic files with the program installation discs. Chances are, you'll get things like census tracts, major roads, county boundaries and city points. You'll want to meet your local and county government GIS folks, because they can lead you to more data. Some public health departments in larger cities also have their own GIS experts, who map such things as reported lead exposures or food illness outbreaks.
• Fourth, you'll need some attribute data. This is the data stored in tables that journalists already use. This can include West Nile Virus infection rates by county, blood lead levels for children by census tract, or fetal and infant mortality rates by county. In addition, you can tap into a wealth of data about HIV/AIDS, cancer, births and sexually transmitted diseases and show county- or metro-area patterns. You'll also want to download geographic data from your state GIS clearinghouse and federal agencies. Some of these web sites offer health geography, such as hospital locations and rural public health clinic sites. Keep in mind that health agencies are prohibited from releasing data if it would invade an individual's medical privacy. You may need to negotiate with the health agency to get summarized data, or data that lacks identifying details. In addition, you should be familiar with how your state laws treat GIS data. The Reporters Committee for Freedom of the Press has a useful online guide.

Ask for help

You might need some outside help to evaluate the data that are available. For example, cancer cluster analysis is outside the reach of the base GIS programs and most journalists.
To pursue such a story, you would need more sophisticated software and expert assistance. And be aware that mapping programs are more difficult to use than spreadsheets or database managers. Many journalists have gotten up to speed by taking specialized training. I am the lead instructor for three-day GIS "boot camp" seminars offered in Columbia, Mo., by Investigative Reporters and Editors and the National Institute for Computer-Assisted Reporting, for which I also serve as academic adviser. In addition, many universities and community colleges offer GIS training sessions that can be worthwhile. And don't forget to ask your newsroom's art department or graphic artists if they can help you. You may find that there is already an expert on the software right in your building. Include these colleagues early in the planning process so they can help you decide how best to illustrate your reporting. After you start using GIS, you'll find that the stories you can improve are limited only by your curiosity and time.

David Herzog is associate professor of newspaper journalism at the University of Missouri School of Journalism and the academic adviser to the National Institute for Computer-Assisted Reporting (NICAR).
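Before any mapping happens, the Herald-Leader-style analysis above is really just a table join and a rate calculation: combine prescription counts with census population, divide, and rank. The short Python sketch below illustrates that step; the county names and figures are hypothetical placeholders, not the newspaper's actual DEA or Census data.

```python
# Sketch of a per-county rate calculation like the one described above.
# All counties and numbers here are hypothetical, for illustration only.

prescriptions = {   # narcotics prescriptions per county (hypothetical)
    "Clay": 9500,
    "Perry": 8200,
    "Fayette": 41000,
}
population = {      # census population per county (hypothetical)
    "Clay": 21000,
    "Perry": 28000,
    "Fayette": 295000,
}

def rates_per_1000(rx, pop):
    """Join the two tables on county and compute prescriptions per 1,000 residents."""
    return {
        county: round(rx[county] / pop[county] * 1000, 1)
        for county in rx
        if county in pop  # skip counties missing from either table
    }

# Rank counties from highest to lowest rate -- the raw counts alone would
# have pointed at the biggest county, not the heaviest prescribing.
ranked = sorted(rates_per_1000(prescriptions, population).items(),
                key=lambda item: item[1], reverse=True)
for county, rate in ranked:
    print(f"{county}: {rate} prescriptions per 1,000 residents")
```

In a real GIS program this per-county rate table would be the "attribute data" joined to a county-boundary layer for display; the point is that normalizing by population is what makes the map honest.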
Humans depend on and impact the physical environment in order to supply food, clothing and shelter. Human activities alter the physical environment, both positively and negatively. The variety of physical environments within the Western Hemisphere influences human activities; likewise, human activities modify those physical environments.