SS Mount Temple
From Wikipedia, the free encyclopedia
SS Mount Temple aground at West Ironbound Island, Nova Scotia
United Kingdom
Name: SS Mount Temple
Owner: Elder Dempster's Beaver Line
Builder: Armstrong-Whitworth
Yard number: 709
Launched: 18 June 1901
Maiden voyage: 19 September 1901
Fate: Sold to US
United States
Name: SS Mount Temple
Owner: Canadian Pacific
Acquired: 1903
Captured: By SMS Möwe and scuttled 6 December 1916
Status: Scuttled
General characteristics
Tonnage: 8,790 GRT
Length: 485 ft (147.8 m)
Beam: 18 m
Draught: 9.3 m
Installed power: 694 nhp
Propulsion: two triple-expansion engines, two screws
Speed: 11.5 knots (21.3 km/h)
Capacity: 1,250 3rd-class and 14 cabin-class passengers
Crew: 117
Armament: 75 mm gun during WWI
In 1916, while crossing the Atlantic with horses for the war effort and a large consignment of newly collected dinosaur fossils (including two specimens of the hadrosaur Corythosaurus), she was captured and scuttled complete with her cargo.
Early history
Mount Temple saw use in November 1901 as a transport ship during the Second Boer War.
After two successful Liverpool–Quebec City runs in 1903, the ship ran aground on West Ironbound Island, Nova Scotia, on 2 December 1907. No lives were lost, but the ship remained stranded until she was refloated in 1908.
Assisting the RMS Titanic
The SS Mount Temple set out from Antwerp on 3 April 1912 under the command of Captain James Henry Moore, bound for Saint John, New Brunswick, transporting over 1,400 immigrants to Canada. Moore was a highly experienced ship's master with over 30 years logged at sea.[2]
On the night of 14–15 April, Mount Temple's lone Marconi wireless operator, John Durrant, was about to sign off for the evening at around 12:30 a.m. ship's time when he picked up the distress signal from RMS Titanic, which had struck an iceberg. He had the message relayed to the bridge by a steward. Captain Moore had standing orders to avoid icebergs, but after receiving the distress call he decided to mount a rescue operation. He had Durrant respond to the CQD, then immediately turned his ship around and steamed north-northeast at the vessel's maximum speed of 11.5 knots (13.2 mph; 21.2 km/h) towards Titanic's reported position. He consulted his chief engineer, John Gillet, to try to coax even more speed out of the ageing vessel. Moore worked out his own rough position as 41° 25' N, 51° 14' W, approximately 49 mi (78 km) away. Even at full speed, it would take around four hours to cover the distance between his ship and Titanic.[3]
Once underway, Moore had his off-duty crew awakened and briefed, and ordered the 20 lifeboats aboard uncovered. He had ropes and ladders readied and lifebelts prepared, and posted extra lookouts to help avoid the icebergs reported in the area.[4] Initial progress was good, but on coming upon a large ice field at around 3:00 a.m., the ship slowed and became increasingly surrounded by pack ice. Around this time, Mount Temple encountered a single-funnel steamer, which went unidentified and caused the ship to take evasive action. At around 3:20 a.m., Moore sighted green lights and bright deck lights approaching. These would later prove to be flares fired from Titanic's lifeboats and rockets from the RMS Carpathia, steaming up from the south-east.[5] With the amount of ice becoming ever greater, Mount Temple hove to around 14 miles short of the wreck site. At first light, at 5:30 a.m., Moore ordered the ship to resume course at low speed, navigating the ice floe. Once clear, Moore and his crew proceeded to the last known position of Titanic, but found no trace of wreckage. Later, using dead reckoning, he calculated that the given position was incorrect, the actual location being around 8 miles further east.[6]
Arriving at the correct SOS position at around 6:30 a.m., Mount Temple combed the area and sighted Carpathia, commanded by Captain Arthur Rostron, but was too late to assist. Carpathia was engaged in picking up the remaining survivors. The two ships sighted the SS Californian steaming toward them from the north shortly afterwards, at 8:00 a.m.[7] After communicating with Captain Rostron, Mount Temple conducted a further search of the area but found nothing, and Moore gave the order to continue the voyage to New Brunswick. Once Mount Temple had docked, he was summoned to the American and later the British inquiries into the sinking.[8]
Controversy surrounds Moore's recollections of Mount Temple's true speed on the evening of 14 April 1912, how far she was from the distress position when she turned to help, and how far she was from Titanic when she stopped. Rumours that Mount Temple, under Moore, ignored Titanic's distress rockets circulated at the time and persist to this day. It has been suggested that Mount Temple was the "mystery ship" seen by officers and passengers aboard Titanic five to ten miles away, rather than the SS Californian as implied by Lord Mersey and the British Board of Trade at the British Inquiry.[9][10] These rumours are strongly contested, however, and many continue to believe that the Californian must have been the ship seen from Titanic, and vice versa.
War service and loss
Crew members lost
The four crew members lost on 6 December were:
What is a computer processor?
Updated on November 13, 2010
The Computer Processor Defined
The processor is like the human brain: it dictates what the computer must do or not do. In essence, the computer processor is tasked with harmonizing the processes inside the computer. It determines which tasks should be given priority and delivers what the computer user needs. The speed at which these tasks are accomplished depends on the computer processor's speed.
Computer Processor
The computer processor is composed of a complex network of circuits that accomplish tasks sent by the computer user.
Computer Processor Speed
As technology progresses, computer processors increase in speed. Speed is measured in megahertz (MHz). One MHz is 1 million cycles per second, where each cycle lets the processor carry out instructions. If your computer processor has a speed of 1,000 MHz, this means that your computer is running at 1,000,000,000 cycles per second. A lot of tasks to do in 1 second. Now, many computer processor speeds are quoted in gigahertz (GHz); 1 GHz equals 1,000 MHz.
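The unit arithmetic above can be sketched in a few lines of Python (a toy illustration of the conversions, not tied to any real hardware API):

```python
def mhz_to_cycles_per_second(mhz):
    """Convert a clock speed in MHz to cycles per second (1 MHz = 1,000,000 Hz)."""
    return mhz * 1_000_000

def ghz_to_mhz(ghz):
    """1 GHz equals 1,000 MHz."""
    return ghz * 1_000

print(mhz_to_cycles_per_second(1000))  # 1000 MHz -> 1,000,000,000 cycles per second
print(ghz_to_mhz(2))                   # a 2 GHz processor runs at 2,000 MHz
```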
What are the Major Computer Processors in the Market?
There are three major lines of computer processors on the market at this time: Intel Pentium, AMD Athlon, and Intel Celeron. These processors come in different versions, and their speeds, as well as their heat tolerance, vary. The faster the processor, the greater the amount of heat it generates. This is a great challenge to manufacturers, since better managing the heat generated by a computer processor also means better performance.
The contending parties in terms of market share are Intel and AMD. Intel is the veteran in computer processor manufacturing, but AMD is keeping up and may at this point match or even surpass the more popular Intel computer processors.
Multi-Core Computer Processors
Computer processors used to have only one core. Now, they are available as dual core, triple core, or quad core. Dual core is equivalent to running two computer processor units, triple core to running three, and quad core to running four. In the future, there may be more cores in the computer processor to achieve greater speed and handle a multitude of tasks.
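As a loose illustration of why more cores help, the sketch below divides a batch of independent tasks evenly across a given number of cores (the task and core counts are made up for the example):

```python
import os

def split_work(n_tasks, n_cores=None):
    """Divide n_tasks independent tasks as evenly as possible across cores.

    If n_cores is not given, fall back to the number of cores the OS reports.
    """
    n_cores = n_cores or os.cpu_count() or 1
    base, extra = divmod(n_tasks, n_cores)
    # The first `extra` cores each take one additional task.
    return [base + (1 if i < extra else 0) for i in range(n_cores)]

print(split_work(10, 2))  # dual core: [5, 5]
print(split_work(10, 4))  # quad core: [3, 3, 2, 2]
```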
Computer Networks 9781558605145
Computer Networks | 2nd Edition
A Systems Approach
ISBN-10: 1558605142
ISBN-13: 9781558605145
PUBLISHER: Elsevier Science & Technology Books
Product Description: NEW EDITION NOW AVAILABLE! ISBN 1-55860-832-X. Networking technologies and practices are constantly evolving. Equip your students with an understanding that helps them keep pace with Internet time. In this carefully focused revision of the best-selling first edition, authors Peterson and Davie reiterate their commitment to a systems-oriented approach to networking instruction. Focusing on the why of network design - not just the specifications comprising today's systems, but how key technologies and protocols actually work in the real world to solve specific problems - they promote an enduring, practical understanding of networks and their building blocks. The second edition incorporates coverage of Quality of Service issues, mobile and wireless networks, VPNs, and much more. Over 100 new exercises help users consolidate and expand their knowledge. No other textbook offers a more solid grounding for aspiring network professionals. Computer Networks: A Systems Approach gives your students the knowledge and perspective they need, and gives you the tools you need to maximize their learning experience: * Unparalleled instruction from an expert team of authors. The authors bring over 30 years of experience in networking research, development and teaching to the task of describing the principles and practical implementation of computer networks. Both have played key roles in defining and implementing many of the protocols discussed inside. * Cutting-edge coverage. The second edition has been thoroughly updated to cover the most recent advances in networking, including: * a new chapter on security techniques - PGP, IPSEC, secure sockets and firewalls * a new chapter on application layer protocols - SMTP, HTTP, SNMP, DNS and RTP * new material on wireless and mobile technology - spread spectrum techniques and 802.11 * a new section on building VPNs on top of the public Internet * Expanded treatment of key issues.
Topics such as Internet routing, Quality of Service, congestion control, ATM, compression and multimedia communications now delve deeper to reflect changes that have taken place over the past four years. * Effective pedagogical components. Alongside the authors' clear explanations and insights, you'll find pedagogical components that significantly enhance students' understanding: * Problem statements - the practical design challenges met by the techniques covered in each chapter * Shaded sidebars - explorations of advanced topics * Highlighted summary paragraphs - distillations of key network design principles * Open Issues - guided discussions of controversial networking issues * Further reading - pointers to definitive papers related to each chapter's coverage * Completely revamped end-of-chapter exercises. The second edition offers over 100 new end-of-chapter exercises, the result of a substantial editing and development effort by a seasoned networking instructor, Peter Dordal of Loyola University. * The optimal pedagogical approach. Encyclopedic and "layered" approaches cover required material but leave critical questions unanswered. Peterson and Davie focus on systems - how they interweave technology and technique to meet practical needs. Adding layer-focused considerations where necessary, the authors teach students why networks are designed as they are and cultivate the skills needed to build the networks of the future. * Real-world implementation examples. New to this edition, operating system-independent C code is used with pseudocode to illustrate protocol implementation throughout. The first edition's x-kernel examples continue to be available online.
Additional Details
PAGES: 749
CATEGORY: Computers
thin smile
Definition of thin smile
1. : a weak smile that does not seem sincere
Word by Word Definitions
1. : having little extent from one surface to its opposite
: measuring little in cross section or diameter
: not dense in arrangement or distribution
1. : to make thin or thinner:
: to reduce in thickness or depth : attenuate
: to make less dense or viscous
1. : in a thin manner : thinly
1. : to have, produce, or exhibit a smile
: to look or regard with amusement or ridicule
: to bestow approval
: a pleasant or encouraging appearance
It’s only a movie.
That’s what you kept trying to remind yourself the first time you saw a classic vampire movie: It’s all special effects. Vampires aren’t real. Transylvania is miles away. It’s only a movie. That helped soothe the creepiness factor but you still kept your turtleneck sweater handy, fang you very much.
From start to finish, Irving created lavishly dramatic spectacles to delight London theater-goers, paying strict attention to detail both on-stage and off. Cast and crew called him “The Governor” and nobody contradicted him — except Bram Stoker.
It was a shocking surprise to Stoker, then, when literary sycophants gained cheeky access to Irving’s inner circle. Stoker grew angry: He’d had an idea for a novel, and Irving’s new friends were less-than-complimentary.
Still resolute, Stoker collected information and made notes, tweaking and creating his masterpiece. Vampire lore had been around for centuries by then, and he was careful to craft details for bits of mythology. Dracula was a well-rounded, thrilling monster. So on whom did Stoker base his vampire?
Steinmeyer says that the answer is complicated. Surely, there’s a bit of Irving in the Count. Stoker may have personally known an infamous murderer, and his research gave the vampire a name and loose historical basis.
Add a bit of autobiography, influence from a randy American poet and a scandalous playwright, and Stoker had a hit.
Think of all the vampires you've known and loved: cartoons, romances, toys, movies (good and bad), even breakfast cereal. Now consider this: Stoker's creature appears in a mere 62 pages of the original novel. So how did Dracula seize our imaginations so strongly?
Among other things, author Jim Steinmeyer answers that question.
Along the way, he busts myths and gives his readers menace, jealousy, and mystery, as well as a wonderful sense of life among the Victorian literati.
No movie creature has ever scared me as Dracula has.
Is your Dracula in the book, as spooky and scary as Christopher Lee in the old Hammer films?
You always wanted to yell at the stranded people at the inn, who decided to go on to Dracula's castle IN THE DARK!! - No! Don't go, stay away! He wants your blood.[smile]
Even now, if a Dracula movie starring Christopher Lee is on, I will not watch it if I'm alone.
Your book sounds as if it would be just as spooky as the old Hammer films.
You have done a good job describing it.
High Fructose Corn Syrup Should be avoided at all Cost
High fructose corn syrup is also known as HFCS, or corn sugar. It is one of the most common sweeteners in sodas and most other flavored drinks, and it is also added to many foods. For these reasons, you need to understand the health implications of consuming high fructose corn syrup.
What is high fructose corn syrup, or corn sugar?
I will explain fructose first, and then corn syrup next. Fructose is a simple sugar that is extracted from many plants. It is more soluble than glucose. Glucose is another simple sugar. Glucose and fructose are generally combined to make sucrose, which is the table sugar you use.
Corn syrup is derived from corn starch and is a glucose-laden syrup. In its natural state, corn syrup does not contain any fructose. However, in the late 1950s, scientists discovered a way to transform some of that glucose into fructose, which led to the mass production of corn sugar. Since the late 1980s, corn sugar has replaced table sugar and honey in a great many products. However, several studies have concluded that HFCS is linked to a number of health concerns.
High fructose corn syrup should be avoided at all cost
Here are the main reasons you should avoid this product at all costs:
HFCS causes weight gain:
Research has shown that laboratory rats fed HFCS gain three times more weight than those fed fruit-derived sugar.
HFCS causes diabetes:
Consuming HFCS over many years puts you at a much higher risk of developing type 2 diabetes.
High fructose corn syrup causes hypertension:
HFCS not only makes you fat; it also causes your triglyceride and bad cholesterol levels to increase significantly. High levels of bad cholesterol are a recipe for hypertension.
HFCS damages the liver:
Your liver processes HFCS, just as it processes any other food. Unfortunately, processing HFCS ends up scarring the liver in the long run.
HFCS contains mercury:
Studies found that corn sugar unfortunately contains mercury; and mercury is very toxic for the body.
If you want to avoid the health issues that stem from consuming processed corn sugar, or HFCS, you should stay away from artificially flavored drinks, and artificially sweetened foods. For example, drink water instead of soda. In addition, you should select your cereals very carefully, as most of them contain high fructose corn syrup. Learn to enjoy vegetables, organic fruits, whole-grain, and low fat yogurt. Avoid candies and pastries.
Religion Wiki
Ammonihah is a city mentioned in the Book of Mormon. According to the book, the city was founded by an otherwise unknown man named Ammonihah.[1] The inhabitants of Ammonihah were followers of the religion of Nehor.[2]
After Alma the Younger had visited several cities, setting the church in order and preaching, he went to Ammonihah to do the same, but was rejected by the people. As he left the city to preach elsewhere, he saw an angel[3] (the same angel who had confronted him prior to his conversion[4]) and was instructed to return to Ammonihah and preach that the inhabitants of the city would be destroyed unless they repented because the Lord declared: "they do study at this time that they may destroy the liberty of thy people, (for thus saith the Lord) which is contrary to the statutes, and judgments, and commandments which he has given unto his people." On returning, Alma met a resident of Ammonihah named Amulek,[5] who gave Alma food and lodging and joined him in his efforts to preach.
Engaging in a verbal confrontation with a lawyer named Zeezrom, Amulek was able to discern his thoughts by the Power of the Holy Ghost and confound him. Alma then stepped forward and began warning the people that they would be destroyed if they did not repent and believe in the Son of God, even Jesus Christ, and obey his commandments. Many of the inhabitants of Ammonihah were converted through the preaching of Alma and Amulek,[6] but their message was rejected by most of the people. The leaders, lawyers, and judges of the city of Ammonihah then brought Alma and Amulek before their chief judge, accusing them of denouncing their laws and of teaching that God's Son would come among the people but would not save them.[7]
Zeezrom, now convinced of Alma and Amulek's righteousness, started speaking in their defense, but he and the other men who had been converted were driven out of the city,[8] and their wives and children were burnt alive along with their scriptures.[9] Forced to watch, Alma and Amulek were threatened with a similar fate, then imprisoned. After several days of mistreatment in prison, Alma and Amulek were confronted again by the lawyers, teachers, and judges of Ammonihah and challenged to show the power of God.[10] Alma called on God, he and Amulek were freed from their bonds, and the prison tumbled down, killing the city leaders but leaving Alma and Amulek unharmed.[11] When the people rushed to the prison to see what had happened and saw Alma and Amulek standing amidst the ruins, they fled in fear.[12]
Alma and Amulek went to the nearby land of Sidom[13] and found the men who had been expelled from Ammonihah for their belief — including Zeezrom, whom Alma healed of a fever and baptized.[14] A few months later, an invading Lamanite army destroyed the city of Ammonihah and killed all its inhabitants.[15] The dead were piled in a heap and covered with earth; on account of its stench, the site became known as Desolation of Nehors[16] and remained uninhabited for many years.
Considerations for Data Analysis
The choices you make while analyzing your data can also contribute to effectively managing your research data:
• Document your steps. Consider the software you use for analysis, and whether those applications automatically generate information about your data files (metadata) and process steps (such as log files). Keeping track of your steps can save you time when you want to recreate your work, or share your methodology with others.
• Boost your skills. If you’re new to an application, or just want to learn more about software you use regularly, look for training opportunities. Emory University provides institutional access to a wide variety of online courses you can take on your own time.
• Keep your data safe. Describe your data as you capture it, organize your files, and make smart choices about where you store your data. Since some software programs produce files that are proprietary and can only be opened in their applications, consider saving data in formats that can be opened by different software programs.
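As a small illustration of the open-formats advice above, this Python sketch re-saves a handful of records as CSV, a format nearly any analysis tool can open (the field names and values are hypothetical):

```python
import csv
import io

# Hypothetical analysis results that might otherwise live in a proprietary format.
records = [
    {"subject": 1, "score": 88.5},
    {"subject": 2, "score": 91.0},
]

# Write CSV to an in-memory buffer; in practice you would open a real file,
# e.g. open("results.csv", "w", newline=""), instead of io.StringIO().
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["subject", "score"])
writer.writeheader()
writer.writerows(records)

print(buf.getvalue())
```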
Learn more about data analysis resources at Emory.
How to Interpret CPAP Data
Written by scott knickelbine
• Share
• Tweet
• Share
• Email
Most newer CPAP machines record a great deal of data about the sleep patterns of the patients who use them. This data is often saved on a smart card that the patient can take along to the doctor or respiratory therapist's office; some machines report the data directly via wireless modem. While each CPAP model records a slightly different set of information, there are a few basic pieces of data that show up in all reports. This information helps therapists tell whether the patient is complying with treatment and whether the CPAP machine or mask needs to be adjusted.
Things you need
• CPAP data report
1.
Look at compliance data to judge whether the patient is actually following his or her treatment regimen. Most reports will show both total hours of use and total days of use for a given period. If total days of use is smaller than the prescribed period length, it indicates the patient is skipping CPAP therapy; if total hours of use divided by days of use is six or fewer, it may indicate the patient is removing the mask at some point during the night.
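The compliance arithmetic in this step can be sketched as follows (the thresholds follow the step above; the report values themselves are invented for illustration):

```python
def compliance_summary(total_hours, days_used, period_days):
    """Apply the two compliance checks described above."""
    avg_hours = total_hours / days_used if days_used else 0.0
    return {
        "skipped_days": period_days - days_used,   # > 0 suggests skipped therapy
        "avg_hours_per_day": avg_hours,            # <= 6 hints at mask removal
        "possible_mask_removal": avg_hours <= 6,
    }

# A hypothetical 30-day report: 140 hours of use over 25 nights.
print(compliance_summary(total_hours=140, days_used=25, period_days=30))
```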
2.
Review apnoea/hypopnea data to judge the effectiveness of treatment. The apnoea/hypopnea index, or AHI, is an hourly average of how many times the patient stops breathing or does not inhale fully. An AHI greater than 5 is an indication that the treatment is not effective: either the CPAP machine is not providing adequate pressure or the mask is not properly fitted. The report may also show the average length of apnoea/hypopnea episodes.
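A minimal sketch of the AHI calculation described above (the event counts and hours are made-up example numbers, not from a real report):

```python
def ahi(apnoea_events, hypopnea_events, hours_of_use):
    """Apnoea/hypopnea index: total events averaged per hour of use."""
    return (apnoea_events + hypopnea_events) / hours_of_use

index = ahi(apnoea_events=12, hypopnea_events=18, hours_of_use=7.5)
print(index)  # 30 events over 7.5 hours gives an AHI of 4.0
print("treatment effective" if index <= 5 else "review pressure or mask fit")
```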
3.
Look at pressure data to see how much pressure the patient requires to prevent apnoea episodes. The most recent CPAP designs adapt the amount of pressure the patient receives according to the breathing data the machine measures. This pressure is usually expressed both as an average pressure (in cm H2O) and as a percentile pressure. The percentile pressure is the pressure at or below which the patient spent that percentage of the time. For instance, if the report shows a 90th-percentile pressure of 11, it means that 90 per cent of the time you were using your CPAP, you were receiving 11 cm H2O of pressure or less. If these pressures are close to the maximum pressure at which the machine is set, it can be an indication that the patient requires a higher maximum pressure.
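The percentile-pressure idea can be sketched with a simple empirical percentile over per-minute pressure samples (the readings below are invented, and report software may compute the percentile slightly differently):

```python
import math

def percentile_pressure(samples, pct):
    """Pressure at or below which the patient spent at least pct% of the time.

    A simple empirical percentile: sort the samples and index into them.
    """
    ordered = sorted(samples)
    k = math.ceil(pct * len(ordered) / 100) - 1
    return ordered[k]

# Ten hypothetical one-minute pressure samples in cm H2O.
readings = [8, 8, 9, 9, 10, 10, 10, 11, 11, 14]
print(percentile_pressure(readings, 90))  # 11: 90% of samples at or below 11 cm H2O
```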
4.
Examine leakage data to determine proper mask fit. A certain amount of air escaping from a CPAP mask is necessary to prevent the patient from rebreathing the same air; however, too much leakage indicates the patient may not be receiving the correct air pressure from the mask. The report typically shows the leakage data in l/min, both as an average and as a percentile. It may also show how much time was spent with excessive leakage from the mask.
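In the same spirit, the time spent with excessive leak can be estimated from per-minute leak samples. The 24 l/min limit and the readings here are purely illustrative; acceptable leak depends on the mask and machine:

```python
def excessive_leak_fraction(leak_samples, limit_l_min=24):
    """Fraction of samples with leak above the given limit (l/min)."""
    over = sum(1 for leak in leak_samples if leak > limit_l_min)
    return over / len(leak_samples)

# Ten hypothetical per-minute leak readings in l/min.
leaks = [10, 12, 30, 11, 40, 12, 12, 13, 12, 50]
print(excessive_leak_fraction(leaks))  # 0.3: 3 of the 10 minutes above 24 l/min
```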
Tips and warnings
• Your respiratory therapist can usually print out a data report that is specifically designed for patients. It provides definitions of unfamiliar terms.
• If you don't understand information in your report, discuss it with your respiratory therapist. If the report shows abnormal data, the only way to determine what's happening is by discussing your sleep experience with your therapist.
• While CPAP data can provide important insights into therapy effectiveness and compliance, it does not provide sufficient information to change a diagnosis or prescribed pressure. The physician or therapist needs to discuss the data with the patient to be able to fully interpret it.
What impact did war have on the French Revolution 1789-1799?
What impact did war have on the French Revolution 1789-1799? Over the period from 1789 to 1799, war had a huge impact on the course and aims of the Revolution. Between 1789 and 1791, before war even broke out, fear of foreign intervention and counter-revolution was rife. This was evident in the measures taken against the émigrés and the non-juring Church. The desertions of countless army generals amongst the émigrés also showed that there was a fear of France being conquered by another European power, and the King's Flight to Varennes displayed his fear of opposition, be it internal or external. Essentially, the fear of war and of internal and external enemies influenced the Revolution and the activity of the king before war even broke out. However, when war did break out on 20 April 1792, it marked a dramatic turn in the Revolution. From this moment onwards, war was the biggest conditioner of the course of the Revolution. The outbreak of war led to fear of counter-revolutionaries inside France who did not agree with it, leading to killings and arrests from 1792 onwards to dispose of 'traitors'. New forces began to emerge as support for a Republic began to grow, due to huge military defeats and desertions that caused the King to be suspected of being in league with the Austrians.
The British were defeated at Hondschoote in the same month and the Austrians at Wattignies in October. The Terror was a consequence of the war going badly, which had led to revolt and economic problems. Now that the federal revolts had been crushed, food supplies were moving into towns and cities thanks to requisitioning, and the value of the assignat was rising along with the victories in the war, so people wanted a relaxation of the Terror. This led to the fall of Robespierre and the Jacobins, who continued to press for the Terror despite it being unnecessary and too dictatorial for the people of France; the measures they took to preserve the Terror lost them the support of the people of Paris, especially the sans-culottes, who again lost influence and power and were left weakened by Robespierre shutting down their organisations, such as the Cordeliers Club. The Terror and its leaders were then overthrown in the coup of Thermidor because of its dictatorial, ruthless methods. The events of the coup of Thermidor effectively meant the rejection of government by Terror. The Terror was dead, although its violence would continue, as peace had not yet been made with the First Coalition. Further on, after the Thermidorian reaction had taken place and the Directory was introduced under the Constitution of Year III, war made the Directory seem successful and prestigious thanks to its huge successes in foreign policy.
These involved requisitioning to prevent further food shortages, and conscription. These policies were met with huge federal resistance, a good example being the Vendée Rebellion. In order to crush this resistance, organisations such as the representatives-on-mission and the Revolutionary Tribunals were created to restore order in France. However, when order was restored and the war was going well in 1794, people began to question the necessity of the Terror. This led to the fall of the Jacobins in 1794 and the creation of the Directory in 1795. The Directory's success relied mainly upon the war: when foreign policy was hugely successful, as in Napoleon's campaign in Italy, the Directory did well; but when the wars began to go worse, as at the Battle of the Nile, the Directory could not cope, owing to its heavy reliance upon the army and on the financial benefits of foreign conquests. War was effectively a huge factor in the failure of the Directory in 1799. This demonstrates the huge part war had to play in the French Revolution. Almost everything that happened in France after 1792 was caused, or affected, by the war. The war destroyed the consensus of 1789 and led directly to the fall of the monarchy, civil war and the Terror.
|
Popular Astronomy
Guide to T Cephei
A circumpolar Mira type variable that can be followed over most of its brightness range using binoculars.
The brightness changes don't repeat exactly from one cycle to the next. As can also be seen in the light curve above, the brightness doesn't rise and fall at a constant rate - it is quite normal for T Cep to "pause" for several weeks close to mag 8.0 during its rise to maximum (the six-week "pause" in early 2015 was unusually long; another long pause occurred during the rise to the May 2016 maximum).
T Cephei is a red giant star. The brightness changes seen are primarily due to pulsations in the star's outer layers. However, these outer layers are also sufficiently "cool" for it to be possible for some very simple molecules to form when the expansion phase causes further cooling and to then break up when the subsequent contraction causes the temperature to rise again.
Extreme brightness range 5.4 - 11.0
More typical range 6.0 - 10.5
Period of variation 389 days (nearly 13 months)
Frequency of observation Worth checking a few times per month
Observe using 40mm or 50mm binoculars when the star is near maximum; 50-80mm binoculars will be required when it gets fainter, and a telescope will be needed at minimum
Visibility Can be observed all year round
Upcoming maxima mid June 2017, mid July 2018
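Because the period is roughly regular, future maxima can be estimated by stepping forward from a known one. A minimal sketch, taking the quoted 389-day period and reading "mid June 2017" as 15 June (an assumption; actual maxima of Mira variables can drift by days or weeks):

```python
from datetime import date, timedelta

PERIOD_DAYS = 389                      # quoted period of variation
reference_max = date(2017, 6, 15)      # assumed date for the "mid June 2017" maximum

def predicted_maxima(n):
    """Return the next n predicted dates of maximum brightness."""
    return [reference_max + timedelta(days=PERIOD_DAYS * k) for k in range(1, n + 1)]

for d in predicted_maxima(2):
    print(d.isoformat())
# 2018-07-09
# 2019-08-02
```

Stepping 389 days from 15 June 2017 lands on 9 July 2018, consistent with the "mid July 2018" maximum quoted above.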
Here are three charts that show the location of T Cephei. All have north at the top.
Comparison stars are marked with their magnitudes with the decimal points removed (so, for example, '54' labels a star of magnitude 5.4)
This next chart, approx 8 degrees by 6 degrees, covers the brighter part of the binocular range:
This final chart covers the fainter binocular range and telescopic range: |
Orthodox Christianity: Was Jesus Human or Divine?
Jesus: human or divine?
Was he just a human being, or was he more than a human being? The first approach, Adoptionism, held that Jesus was basically a human being who was anointed by the Holy Spirit in the same way as the prophets of the Old Testament, but to a greater extent. The next idea, Docetism, argued that Jesus Christ was completely divine but only appeared to be human: Christ presented himself to humanity as one who shared their condition. The last approach, which may well be the most important failed attempt to define the identity of Jesus, was Arianism. This was the idea that Jesus Christ was not divine, but was supreme among God's creatures (McGrath).
Orthodox Christianity is one of the oldest religions in the world. In some ways it is mysterious and unlike other religions; in others it closely resembles traditions such as Roman Catholicism. Orthodox Christians believe in one God, expressed as the Trinity: the Father, the Son, and the Holy Spirit together make up what they believe to be God. They believe in the seven sacraments: Baptism, Chrismation, Holy Communion, Holy Confession, Ordination, Marriage, and Holy Unction. Orthodox Christians believe that during the Eucharist believers partake mystically of Christ's body and blood and through it receive his life and strength. They believe each member must experience truth personally rather than merely accept it as doctrine. They believe that one must receive God's grace in order to receive forgiveness of transgressions and freedom from bondage and punishment. Salvation is the process of re-establishing man's communion with God. They see the afterlife as deification - not that humans become gods, but that humans join fully in God's divine life (Eastern).
Orthodox Christians believe that Jesus Christ is God in the form of human flesh. They think that Jesus is not part human and part divine, but fully human and fully divine. This means that Jesus has...
|
London Shard
The Shard, located in London, England
The Shard is the tallest building in Europe. This irregular pyramidal structure, clad entirely in glass, consists of ninety-five stories and stands at a height of 1,016 feet. The building was finished in July 2012 and was designed by the architect Renzo Piano, who designed it to be extremely environmentally friendly and economically sustainable. The building was even given a BREEAM Excellent rating.
Piano designed the Shard to use renewable natural resources and so reduce the depletion of nonrenewable ones. For example, the 11,000 glass panels that make up the outside of the structure are designed to reduce heat from the sun by 95%. As these panels approach the top, they form a spire. The final nine levels of the building make up the spire and are open to the elements, which allows the building to breathe. On the inside of the glass panels there is also a ventilated inner cavity housing a solar-control blind and a double-glazed unit. The glazing on the panels reduces infrared radiation. An intelligent blind-control system tracks the position and intensity of the sun and deploys the blinds only when required. These design features minimize the power used for air conditioning, which would otherwise draw on nonrenewable natural resources. The panes of glass also allow for natural illumination, which reduces the electricity used to light the building. Another way that the Shard uses renewable natural resources to help the environment is that it has two natural winter gardens per floor that ventilate the workplaces with clean air instead of using electricity and filters.
The Shard is economically sustainable through its own power plant and other high-technology systems. Installing a power plant on site at the Shard was very expensive, but the extra expense will be recovered in due time. Because the plant burns natural gas, it saves money: today natural gas costs only $2.50-$4.00 per thousand cubic feet, whereas oil is $3.64 per gallon (Gripper). Even though oil burners burn hotter, they are less efficient than natural gas burners, so when comparing prices and efficiencies, natural gas is more cost effective. Also, when natural gas is converted to electricity it creates heat, and the designers of the Shard cleverly installed a heat exchanger to transfer heat from the power-generation system to the building's heating system, so less money must be spent heating the building. Another benefit of the on-site power plant is that the owners can install their own sophisticated technology to increase the plant's efficiency, which saves money because the plant requires fewer utilities.
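The fuel-price comparison above is easier to see on a per-unit-of-energy basis. A rough sketch using the quoted prices; the energy contents (about 1.037 million Btu per thousand cubic feet of gas, about 138,500 Btu per gallon of heating oil) are standard approximate values assumed here, not taken from the article:

```python
# Cost per million Btu (MMBtu) for natural gas vs heating oil.
GAS_PRICE_PER_MCF = 3.25      # midpoint of the quoted $2.50-$4.00 per thousand cubic feet
OIL_PRICE_PER_GAL = 3.64      # quoted heating-oil price per gallon

BTU_PER_MCF = 1_037_000       # assumed typical heat content of natural gas
BTU_PER_GAL_OIL = 138_500     # assumed typical heat content of heating oil

gas_cost_per_mmbtu = GAS_PRICE_PER_MCF / (BTU_PER_MCF / 1_000_000)
oil_cost_per_mmbtu = OIL_PRICE_PER_GAL / (BTU_PER_GAL_OIL / 1_000_000)

print(f"gas: ${gas_cost_per_mmbtu:.2f} per MMBtu")   # ~$3.13
print(f"oil: ${oil_cost_per_mmbtu:.2f} per MMBtu")   # ~$26.28
```

On this basis gas delivers the same heat for roughly an eighth of the price, which is why the on-site gas plant pays back its installation cost over time.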
These are only some of the features that make the Shard a green building, but they are among the most important, and they are ideas others should take into consideration when designing green buildings.
2 thoughts on “London Shard
1. Pingback: Like the opening credits of Mary Poppins – The Shard | It's Life Jim...
2. Pingback: Travel Knack / Like the opening credits of Mary Poppins – The Shard, London
Leave a Reply
You are commenting using your account. Log Out / Change )
Twitter picture
Facebook photo
Google+ photo
Connecting to %s |
How is competitive programming different from real-life programming?
Adapted from Anthony Moh:
In competitive programming, you just have to choose the first algorithm that comes to mind that you think will work, and then code it. The aim while coding is to just get it down and make minimal mistakes. You do not have to worry about maintenance, documentation, etc. There is no need to think much about how to name the variables, split the code into functions, and so on. Also, competitive coding is short: you will not have to spend more than a few days on a problem, and most of your time is spent coding.
While competitive programming gives you important knowledge of algorithms and how to implement them, you will find that in most jobs, coders just use libraries of algorithms. So the most useful part of competitive coding is learning which algorithm to use for the problem at hand. At the office, you will spend only a small amount of your time writing code. Most of your time is spent deciding what to code, testing, documenting and …
You are in the jungle. You have a pocket-knife. Someone asks you to kill a mountain lion. Anyone but a programmer would ask "WTF is a MOUNTAIN lion doing in a JUNGLE?!", but that's not what you have been trained to do as a programmer. You are here to solve problems, not to question them.
Years of training has taught you well. You use your knife to sharpen a stick. You cut vines to lash sharp stones on one end. Maybe you’re from a top university, and you’ve learned to extract essential ingredients from plant and insect life around you to fashion a poison to tip your weapon with.
Convinced that you have an effective and efficient way to kill the lion, you set forth to accomplish your task. Maybe your stick is too short, or your poisons don’t work. It’s okay – you live to refine your method and try again another day.
Then someone figures out a way to fashion a low-grade explosive from harvesting chemicals in the jungle. Your method of fashioning a spear to kill the lion is now far from the best way to accomplish your task. Nevertheless, it’s still a simple way, and will continue to be taught in schools. Every lion-killer will be taught how to build his tools from scratch.
That’s “real-life” programming.
Competitive programming, on the other hand, works like this. First, you must learn how to find the lion's critical point and kill it in one swift stroke.
Second, you must learn how to be so handy with your knife that you can fashion a sharp stick in 1 minute, and spend the next minute stabbing the lion to death.
But never ever will you be able to have enough time to fashion an explosive to take the lion out.
A more experienced programmer just keeps stabbing the lion and hopes that the lion dies in time. Soon, you learn that there are certain spots on a lion that are damage immune. You learn to not even bother stabbing those spots. Sometimes, the lion doesn't expose those spots, so you get really good at killing squirrels.
Soon, you learn that if you kill a squirrel, sometimes the judge thinks it's a lion and you're good to go.
|
Doomed galactic smash-up: Milky Way to crash with Andromeda
Our entire Milky Way galaxy is set to crash into the neighboring Andromeda. The collision would create an entirely new hybrid galaxy and dramatically change the view of the night sky from Earth.
Astronomers and researchers are now sure that the galaxy containing the Earth is destined to cease to exist in its present form.
Years of observations from the Hubble Space Telescope indicate the Andromeda galaxy is coming towards us at a speed of about 400,000 kilometers per hour (250,000 miles per hour) and a head-on hit is imminent.
The collision will begin about four billion years from now and will be complete by about six billion years from now.
"The Andromeda galaxy is heading straight in our direction," Roeland van der Marel, an astronomer with the Space Telescope Science Institute in Baltimore, which operates Hubble, told the media. "The galaxies will collide, and they will merge together to form one new galaxy."
The new galaxy will likely be elliptical in shape rather than a barred spiral like the Milky Way, and it will completely change our night sky, with Andromeda suddenly dominating.
So Earth, they say, should easily survive what will be a 1.9 million kilometer per hour (1.2 million mile per hour) galactic merger. Even at that speed, the event would take about 2 billion years.
"It's like a bad car crash in galaxy-land," van der Marel describes it.
Both the Milky Way and Andromeda are about the same size and same age — 10 billion years old. At times they have been considered virtual twins, so it is hard to tell which of the galaxies will get the worst of the collision, van der Marel said.
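The quoted figures allow a quick sanity check on the timescale. This constant-speed estimate assumes the commonly cited distance to Andromeda of about 2.5 million light-years, a value not given in the article:

```python
# Naive time-to-collision at constant approach speed.
KM_PER_LY = 9.4607e12                 # kilometres per light-year
distance_km = 2.5e6 * KM_PER_LY       # assumed ~2.5 million light-years to Andromeda
speed_km_per_h = 400_000              # approach speed quoted in the article
HOURS_PER_YEAR = 24 * 365.25

naive_years = distance_km / speed_km_per_h / HOURS_PER_YEAR
print(f"constant-speed estimate: {naive_years / 1e9:.1f} billion years")
# constant-speed estimate: 6.7 billion years
```

The naive answer, roughly 6.7 billion years, overshoots the quoted four billion because mutual gravity accelerates the approach as the galaxies close in.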
|
Sunday, March 28, 2010
The Private Eye
A friend of mine recently started a workshop based on The Private Eye: "5x" Looking/Thinking by Analogy and I am hooked! The process is very simple yet quite powerful. The product is designed for the school market but really could flourish in homeschools.
The idea is this: look at a natural object using a 5x loupe. That alone is fascinating, because objects look so different magnified, revealing detail one never imagined was there. It also focuses a child's attention on that small piece of the world. Next, make a list of analogies for the object, thinking about what other things it reminds you of. Similes and metaphors work well here; try to come up with a list of 5 to 10 of them.
After examining the object for a while, draw what you see. This is ideal material for any nature notebook, of course. We draw a circle to simulate the loupe and then draw the object in it.
Finally, kids ask themselves why something looks the way it does, why it has the structure it does. Because so often form follows function, this line of questioning is really a scientific investigation. The first steps are to ask a question and form a hypothesis. Most often the answers will come from a book or Internet search, but as kids get older they may be inspired to carry out some investigations themselves.
One can easily imagine applying this simple process endlessly throughout years of nature study. Yet what I found even more intriguing are the suggestions and projects listed in the book to extend the analogies and investigations, especially for writing. By thinking in analogies, the natural objects more easily become the subjects of poetry, short stories, expressive journaling, as well as research. The book gives ideas "across the curriculum", as one would expect from a school-based curriculum. I find it to be a fascinating way to look, think, and write about nature.
Fiddler said...
With what ages do you think this curriculum would be successful, Kris? I have a 12 y.o. who will not draw (in front of others, anyway, or for an assignment), and an 8 y.o. who l-o-v-e-s anything called art.
MiaZagora said...
Thank you, so much, for posting this! I have heard similar concepts used for nature study, but I had not thought about weaving in the bit about metaphor, simile, or analogies!
::carol:: said...
How cool!! This looks like something my kiddos would enjoy, off to order!!
Jimmie said...
I never thought of using a loupe. I've used a magnifying glass before, though.
Kris said...
We split our rather large group into ages 7 to 10, and 11+
For the youngers we just have them do the basic sequence of view, make analogies, draw, and think why. The olders, after they do those basic steps, are encouraged to do some creative writing.
I would consider having your older child draw anyway only because it will help him/her look at detail, but not put any emphasis on it. Maybe your child will be more interested in the creative writing or research part of it and so I would put the emphasis there.
I find the beauty of the program is to think differently about objects to enhance your art, writing, and/or research. In that sense art is only one area that can be minimized if it is not your child's area of interest.
Kris said...
Jimmie, I found that looking through the loupe better focuses our attention on the object because of the narrow scope of view compared to a magnifying glass, much like a microscope. When they draw a circle in which to draw what they see I encourage them to fill up the circle just like the object fills the view in the loupe.
Paula said...
What a simple and great Aristotelian idea for nature study. I will check if the Book Depository has this book.
Mama Squirrel said...
I had this book and gave it away because a) I couldn't find anywhere to buy 5x loupes and b) I couldn't figure out how to use the book--like Drawing with Children, it gives you an overall idea but it's a little hard to figure out where the "lessons" are. In a way it's good that it's very open-ended, but I could have used a few more specific suggestions--that is, if we had been able to find the loupes in the first place.
Kris said...
If you follow the top link it goes to The Private Eye website and they sell 5x loupes; if you buy two you can nestle them to get 25x magnification.
The amount of mag doesn't matter, IMHO; I already have a 10x loupe I picked up (at Amazon or Home Science Tools?) It's having the narrow field of view that encourages looking at the details.
You are right in that the book does not have lesson plans; rather, it has lists of suggestions by subject to expand on the basic 4-step process. It is not an out-of-the-box program.
I also see it as partly habit training in observation - the basis of Drawing With Children, too. My mother happens to be using that book with them for art. Funny how when it relates to science and nature I get it, but when it was art I didn't.
{ jamie } said...
That sounds fun! Thanks for sharing the idea! |
Friday, July 20, 2012
Point-and-Shoot Game Creation
Were you ever the type to pick up a newspaper just so you could complete the crossword puzzle? Apparently this was a point of entry into the paper for many people, but the concept didn't get translated very well when journalism went digital. In his Games for Change Festival talk Ian Bogost suggests that newsgames might be a new way to draw people in.
But here's the problem: making games is kind of hard. And newsgames — “any application of journalism in video game form” — have to be made quickly so they are relevant to current events. Should we start training journalists in computer programming? That might be useful, but it's not a short term solution.
Instead, Bogost has created Game-O-Matic, a tool that can quickly create simple microgames based on a concept map anyone can put together. This allows journalists to work at the ideation level. It's kind of like being able to take a snapshot of the world with a point-and-shoot camera; it's a point-and-shoot game maker.
The concept is really interesting because it distills games down to their most basic pieces. If the idea ends up working well, I could see it going much further with story generation based on concept maps. If nothing else, it could be quite useful as a brainstorming tool.
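The "concept map in, microgame out" idea can be sketched in a few lines. To be clear, Game-O-Matic's real input format and API are not documented here, so the data structure, node names, and rule template below are invented purely for illustration:

```python
# Hypothetical concept map: a list of (subject, verb, object) edges, the kind
# of relationship diagram a journalist might sketch about a news story.
concept_map = [
    ("voters", "elect", "mayor"),
    ("mayor", "cuts", "budget"),
    ("budget", "funds", "schools"),
]

def to_game_rules(triples):
    """Map each (subject, verb, object) edge to a crude collision-style game rule."""
    return [f"when {s} collides with {o}: play verb '{v}'" for s, v, o in triples]

for rule in to_game_rules(concept_map):
    print(rule)
```

Even this toy version shows why the approach is attractive: the author only supplies relationships, and the generator decides how those relationships become interactive mechanics.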
blagh said...
There's the point, too, that newspapers rarely create their own sudoku puzzles and comics - they're sourced from creators who make them professionally. It might make sense for a newspaper to hire a programmer to create the games - they already have them for the website, after all, and there are some transferable skills - but less sense for journalists to begin learning to code for the purpose of attracting them to other articles.
Gail Carmichael said...
True enough. In the meantime, it would be nice for journalists to be able to put something together easily. It won't be as good as what a programmer could do, but it won't cost as much either, so it makes a good starting point.
|
T.Könik; Automatically Creating Knowledge-Rich AI ... 26.04.2006
Faculty of Engineering and Natural Sciences
Automatically Creating Knowledge-Rich AI Systems Using Relational Machine Learning
Tolga O. Könik
Center for the Study of Language and Information, Stanford University
Developing AI systems that function autonomously and intelligently in complex environments is a difficult process that requires substantial programming expertise and development time. In this talk, I present a machine learning framework that automates this process. We expect the target AI systems to be capable of processing complex knowledge structures, which most mainstream machine learning techniques fail to represent properly. Fortunately, recent machine learning research, usually treated under umbrella headings such as relational learning or inductive logic programming (ILP), provides a framework that can address this issue. ILP combines complex knowledge structures, logical reasoning and machine learning using a first-order logical representation.
In this talk, I present a framework for learning by observation that automatically creates AI programs from the behaviors of experts. I describe how an ILP approach that I call "relational learning by observation" addresses some of the most critical challenges of the learning-by-observation problem, such as the fact that the expert's mental reasoning is not directly available to the learner.
I also describe another framework for AI program creation, where a human expert specifies abstract scenarios describing the desired behavior of the target AI system using a diagrammatic, storyboard-like representation. This approach is an example of a new paradigm for programming AI systems in which the expert/programmer transfers his/her knowledge to the AI system using a graphical interface. Although this approach uses the same underlying learning system as our first approach, its graphical interface provides new mechanisms for the expert to communicate his/her reasoning. Moreover, it is a more interactive approach, in which a previously learned agent program helps the expert in specifying the scenarios. The agent program gives immediate feedback on how it would react to the specified situations, helping the expert to generate more relevant behavior data.
Tolga O. Könik graduated from Bogazici University with a B.S. degree in Electrical Engineering and a B.S. degree in Mathematics in 1997. He earned his M.S. degree in Systems and Control Engineering from the same institution. In 2001, he received an M.S. degree in Computer Science from the University of Michigan. He is expected to receive his Ph.D. degree in Computer Science from the same institution in July 2006. His dissertation work involves learning by observation in cognitive agents. Since August 2005, he has been working as a Research Scientist at the Center for the Study of Language and Information at Stanford University. He is also associated with the Institute for the Study of Learning and Expertise. His most recent work involves studying how knowledge can be transferred between different machine learning tasks. His current research interests include Machine Learning, Relational Machine Learning, Inductive Logic Programming, General Cognitive Agent Architectures, Learning in Cognitive Agents, Common Sense Reasoning, and Qualitative Reasoning.
April 26, 2006, 13:40, FENS 2019 |
Living longer through dietary control
Peter Jaret, author of many health-related books, observes that certain groups of people around the world enjoy exceptionally long lives.
According to him, Pacific Islanders have an average life expectancy of more than 81 years, compared to 78 years in the United States and a worldwide average of 67 years.
“What makes these groups so fortunate? Evidence suggests that diet is one of the important contributors to longevity and healthy living,’’ he observes further.
He argues that a healthy diet is one that helps to maintain or improve general health by providing the body with essential nutrition.
Corroborating this viewpoint, a nutritionist, Mrs Folasade Olatana, explained that eating foods such as nuts regularly could reduce the risk of contracting major chronic diseases, including heart disease and diabetes, that frequently result in deaths.
“Those that eat nuts actually live longer. Studies show that nuts help to lower cholesterol and improve arterial function and blood sugar levels.
“Daily nut consumers have fewer deaths from cancer, heart disease and respiratory disease, even after controlling other lifestyle factors.
“Nut consumers live significantly longer whether they are older or younger, fat or slim; diets enriched with nuts do not affect body weight, body mass index or waist circumference,’’ Olatana, a consultant with Lagos University Teaching Hospital, Idi-Araba, said.
In addition, Ms Yemisi Olowookere, a dietician with Garki Hospital, Abuja, observed that cultivating the habit of natural spices instead of processed seasonings would enhance longevity and healthy living.
According to her, natural spices, such as ginger and garlic, contain vital minerals and vitamins that improve health.
“People don’t really know the importance of taking garlic and ginger; they look ordinary but are significant in making our bodies healthy.
“Instead of using the processed or artificial seasoning sold in the market for food, one can add ginger and garlic to improve our health,’’ Olowookere said.
She also said that garlic and ginger were two herbs that possessed therapeutic and health benefits.
“Both of these herbs have been studied for their effectiveness in fighting infections, preventing cancer, reducing inflammation and various other applications.
“Garlic is known to have antifungal, antiviral and antibacterial properties and both garlic and ginger are thought to have anti-inflammatory properties.
“Ginger is sometimes used to treat arthritis, a disease characterised by inflammation. When ginger is taken long term, it has a sugar-reducing effect for those that have diabetes,’’ Olowookere said.
She observed that although garlic could have a strong smell, its efficacy was more beneficial than its smell.
In her opinion, Hajiya Jummai Abdul, a nutritionist at Wuse General Hospital, Abuja, stressed that regular intake of yoghurt could also be helpful in the treatment of various diseases and reduce rate of deaths among young persons.
“Yoghurt prevents heart diseases and lowers the risks of many ailments, including colon cancer; one can enjoy it plain, flavoured or mixed with fruit or fruit syrups.
“It is a great source of protein, calcium, vitamin A and vitamin B12; all these nutrients are important for bone health,’’ she said.
She, nonetheless, advised that if anybody is allergic to milk, such person should avoid taking yoghurt because it contains milk proteins.
Abdul explained that regular intake of yoghurt would promote the normal growth and developments of bones in children by providing nutrients that maintain bone solidity and strength throughout life.
“Women who suffer from gastrointestinal conditions such as lactose intolerance, constipation and inflammatory bowel disease, among others, may find relief through the consumption of yoghurts containing active cultures,’’ she said.
“Yoghurt is a great source of calcium which is especially important for pregnant women whose calcium reserves are used by their growing baby.
“Children can consume all kinds of yoghurt and enjoy its benefits as a source of protein, calcium and high phosphorus,’’ she said.
She added that yoghurt contained ingredients that could stabilise a woman’s body system and provide healthy living.
She also explained that an essential mineral in yoghurt known as zinc could boost fertility in men.
For effective dietary control, Dr Kingsley Umeh, a private medical practitioner in Abuja, warned against inclusion of processed foods in daily diets as they might result in piles.
He said adequate water intake, consumption of healthy meals and maintaining a healthy lifestyle were keys to achieving a long life.
“Most people do not get enough fibre in their diet and they do not even eat enough fresh vegetables and fruits,’’ he observed.
Umeh, therefore, insisted that taking the time to fill one’s plate with lean proteins, vegetables and other foods rich in fibre, as well as eating moderately, would help people to live longer in good health.
|
Psychology Wiki
Familial advanced sleep phase syndrome
Familial advanced sleep phase syndrome (FASPS) is a form of the sleep disorder advanced sleep phase disorder, which has been shown to have an inherited, familial component.
In 1999, Louis Ptáček's research group at the University of California, San Francisco reported findings of a human circadian rhythm disorder showing a familial tendency. The disorder was characterized by a lifelong pattern of sleep onset around 7:30 p.m. and offset around 4:30 a.m. Among three lineages, 29 people were identified as affected with this familial advanced sleep-phase disorder (FASPD), and 46 were considered unaffected. The pedigrees demonstrated FASPD to be a highly penetrant, autosomal dominant trait.[1]
Two years after reporting the finding of FASPD, Ptáček's and Fu's groups published results of genetic sequencing analysis on a family with FASPD. They genetically mapped the FASPD locus to chromosome 2q where very little human genome sequence was then available. Thus, they identified and sequenced all the genes in the critical interval. One of these was Period2 (Per2). Sequencing of the hPer2 gene revealed a serine-to-glycine point mutation in the CKI binding domain of the hPER2 protein that resulted in hypophosphorylation of Per2 in vitro.[2]
In 2005, Fu's and Ptáček's labs reported discovery of a different mutation causing FASPD. This time, CKIδ was implicated, with an A-to-G missense mutation that resulted in a threonine-to-alanine alteration in the protein.[3] The evidence for both of these reported causes of FASPD is strengthened by the absence of the mutations in all tested control subjects and by demonstration of functional consequences of the respective mutations in vitro. Fruit flies and mice engineered to carry the human mutation also demonstrated abnormal circadian phenotypes, although the mutant flies had a long circadian period while the mutant mice had a shorter period.[2][3] The differences between flies and mammals that account for this discrepancy are not known. Most recently, Ptáček and Fu reported additional studies of the human Per2 S662G mutation and the generation of mice carrying the human mutation. These mice had a circadian period almost 2 hours shorter than wild-type animals. Genetic dosage studies of CKIδ on the Per2 S662G background revealed that CKIδ has opposite effects on Per2 levels depending on which Per2 sites it phosphorylates.[4]
References
1. Jones, Christopher R., Scott S. Campbell, Stephanie E. Zone, et al. (September 1999). Familial advanced sleep-phase syndrome: A short-period circadian rhythm variant in humans. Nature Medicine 5 (9): 1062–1065.
2. Toh, Kong L., Christopher R. Jones, Yan He, et al. (9 February 2001). An hPer2 phosphorylation site mutation in familial advanced sleep phase syndrome. Science 291 (5506): 1040–1043.
3. Xu, Ying, Quasar S. Padiath, Robert E. Shapiro, et al. (31 March 2005). Functional consequences of a CKIδ mutation causing familial advanced sleep phase syndrome. Nature 434 (7033): 640–644.
4. Xu, Ying, Kong L. Toh, Christopher R. Jones, et al. (12 January 2007). Modeling of a human circadian mutation yields insights into clock regulation by PER2. Cell 128 (1): 59–70.
The Two Babylons
By Alexander Hislop
Chapter VI
Section II
Priests, Monks, and Nuns
* Revelation 17:5. The Rev. M. H. Seymour shows that in 1836 the whole number of births in Rome was 4373, while of these no fewer than 3160 were foundlings! What enormous profligacy does this reveal!--"Moral Results of the Romish System," in Evenings with Romanists.
Out of a thousand facts of a similar kind, let one only be adduced, vouched for by the distinguished Roman Catholic historian De Thou. When Pope Paul V meditated the suppression of the licensed brothels in the "Holy City," the Roman Senate petitioned against his carrying his design into effect, on the ground that the existence of such places was the only means of hindering the priests from seducing their wives and daughters (and sons)!!
* It has been already shown that among the Chaldeans the one term "Zero" signified at once "a circle" and "the seed." "Suro," "the seed," in India, as we have seen, was the sun-divinity incarnate. When that seed was represented in human form, to identify him with the sun, he was represented with the circle, the well known emblem of the sun's annual course, on some part of his person. Thus our own god Thor was represented with a blazing circle on his breast. (WILSON'S Parsi Religion) In Persia and Assyria the circle was represented sometimes on the breast, sometimes round the waist, and sometimes in the hand of the sun-divinity. (BRYANT and LAYARD'S Nineveh and Babylon) In India it is represented at the tip of the finger. (MOOR'S Pantheon, "Vishnu") Hence the circle became the emblem of Tammuz born again, or "the seed." The circular tonsure of Bacchus was doubtless intended to point him out as "Zero," or "the seed," the grand deliverer. And the circle of light around the head of the so-called pictures of Christ was evidently just a different form of the very same thing, and borrowed from the very same source. The ceremony of tonsure, says Maurice, referring to the practice of that ceremony in India, "was an old practice of the priests of Mithra, who in their tonsures imitated the solar disk." (Antiquities) As the sun-god was the great lamented god, and had his hair cut in a circular form, and the priests who lamented him had their hair cut in a similar manner, so in different countries those who lamented the dead and cut off their hair in honor of them, cut it in a circular form. There were traces of that in Greece, as appears from the Electra of Sophocles; and Herodotus particularly refers to it as practiced among the Scythians when giving an account of a royal funeral among that people. "The body," says he, "is enclosed in wax. 
They then place it on a carriage, and remove it to another district, where the persons who receive it, like the Royal Scythians, cut off a part of their ear, shave their heads in a circular form," &c. (Hist.) Now, while the Pope, as the grand representative of the false Messiah, received the circular tonsure himself, so all his priests to identify them with the same system are required to submit to the same circular tonsure, to mark them in their measure and their own sphere as representatives of that same false Messiah.
* There are some, and Protestants, too, who begin to speak of what they call the benefits of monasteries in rude times, as if they were hurtful only when they fall into "decrepitude and corruption"! Enforced celibacy, which lies at the foundation of the monastic system, is of the very essence of the Apostacy, which is divinely characterised as the "Mystery of Iniquity." Let such Protestants read 1 Timothy 4:1-3, and surely they will never speak more of the abominations of the monasteries as coming only from their "decrepitude"!
* Mamacona, "Mother Priestess," is almost pure Hebrew, being derived from Am a "mother," and Cohn, "a priest," only with the feminine termination. Our own Mamma, as well as that of Peru, is just the Hebrew Am reduplicated. It is singular that the usual style and title of the Lady Abbess in Ireland is the "Reverend Mother." The term Nun itself is a Chaldean word. Ninus, the son in Chaldee is either Nin or Non. Now, the feminine of Non, a "son," is Nonna, a "daughter," which is just the Popish canonical name for a "Nun," and Nonnus, in like manner, was in early times the designation for a monk in the East. (GIESELER)
In terms of obesity diagnosis, the history, physical examination, and laboratory evaluation of overweight and obese patients are directed toward three goals: first, to identify secondary causes of obesity; second, to identify comorbid conditions; and third, to establish the patient’s dietary and activity habits.
Height and weight measurements in the office are used to classify patients as overweight or obese according to BMI criteria; however, these criteria may not apply to patients who have gained weight as the result of increased muscle mass from intensive exercise.
Evaluation of abdominal obesity requires the use of a tape measure. A waist circumference (obtained at the level of the superior iliac crest) greater than 40 inches (102 cm) in a man or greater than 35 inches (88 cm) in a woman is considered abnormal.
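The BMI and waist-circumference cutoffs described here lend themselves to a small screening helper. A minimal sketch in Python (the BMI category labels follow the common adult convention and are an assumption on my part, not stated in this text):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

def bmi_category(value: float) -> str:
    # Conventional adult cutoffs; as noted above, they may misclassify
    # very muscular patients.
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal"
    if value < 30:
        return "overweight"
    return "obese"

def abdominal_obesity(waist_cm: float, male: bool) -> bool:
    # Abnormal: greater than 102 cm (40 in) in a man,
    # greater than 88 cm (35 in) in a woman, per the text.
    return waist_cm > (102 if male else 88)
```

For example, a 70 kg patient who is 1.75 m tall has a BMI of about 22.9 and would be classified as normal weight.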
Specific physical findings that might indicate secondary causes of obesity include pretibial edema and delayed tendon reflexes (hypothyroidism), purple striae, supraclavicular fat pad enlargement, and muscle weakness (Cushing syndrome). Other aspects of the clinical evaluation focus on comorbid conditions.
A number of the symptoms associated with diseases that can cause or contribute to unwanted weight gain, such as hypothyroidism or Cushing disease, occur frequently in overweight patients. These include fatigue, aches, cold intolerance, constipation, poor exercise tolerance, central obesity, loss of libido, and depression. Deciding when to screen a patient for secondary causes of obesity, therefore, can be a challenge for the practitioner.
Establishing a pattern of weight gain may be helpful. A patient with a lifelong history of being heavy and a stable adult weight is unlikely to have a secondary cause of obesity. A sudden or rapid weight gain over a few months or years, however, especially when accompanied by onset of comorbid conditions, may correspond to the prescription of medications that contribute to excess weight gain (especially steroids and newer antipsychotics) or indicate onset of an illness that requires further evaluation.
The history should include questions about diseases for which overweight and obese patients are at higher risk, including hypertension, impaired glucose tolerance or diabetes, hyperlipidemia, heart disease, pulmonary disease, and sleep apnea. These conditions may cause minimal or no symptoms, and therefore may be present for months or years before a diagnosis is made. Sleep apnea in particular is a common cause of fatigue and poor concentration or work performance in obese patients; these symptoms are often mistakenly ascribed to an abnormally functioning thyroid gland (despite normal results on thyroid function tests) or a so-called altered metabolism. This diagnosis may be missed unless the clinician specifically asks about characteristic symptoms: restless sleep at night, snoring or observed apnea, fatigue or headache upon awakening and during the daytime, and spontaneous daytime sleep when inactive or while driving.
In severely obese patients, increasing peripheral edema, orthopnea, and worsening exercise tolerance may be symptoms of congestive heart failure or pulmonary hypertension and right-sided heart failure from severe sleep apnea. New-onset headaches may indicate idiopathic intracranial hypertension (pseudotumor cerebri). Gastroesophageal reflux disease usually results in heartburn or an acid taste in the throat. During a period of weight gain, women may develop irregular periods or symptoms of androgen excess. Although commonly diagnosed as polycystic ovary syndrome (PCOS), these findings differ from classic PCOS in that they occur after menarche and are not usually associated with polycystic ovaries.
Finally, inquiring about past and present dietary and activity habits is important for subsequent discussions of medical and surgical management. Most overweight and obese patients will have made numerous attempts to lose weight, through diets, exercise regimens, or commercial weight-loss programs. Because of unrealistic expectations and the inevitable weight regain that occurs, patients are often discouraged or leery of new advice. |
Since its inception, Google has always had the motto of producing software and applications that serve the Internet to its full potential and allow its users to gain the most out of it. Market competition has always been the driving force behind its pursuits, and the result has constantly been for the better. In keeping with the platform it serves, the ever-changing Internet, Google has kept up and done better each time the need arose. The latest dish out of its kitchen is the all-new web browser, Chrome, which hopes to put an end to many problems of the Internet. Besides providing the basic functionalities of a browser, it also adds features in each area to generate a standard paradigm of web use and development. For those curious about Chrome, this article provides some information about the technology behind it and some of its features.
The problem with existing browsers is that they are single threaded, in the sense that at any one time only one operation uses the resources. Chrome overcomes this limitation by introducing a separate process for each tab, each with its own copy of memory and data structures. JavaScript, which renders the dynamic appearance of the web, likewise runs separately for each of these processes, so a page continues to function in one tab even if others are waiting for action, keeping continuity preserved. The older design allocated the same space to all tabs, and closing one tab left its allocated space unusable; here, each closing clears all associated memory resources and frees up space for other applications. Chrome also provides its own task manager, which can be used to inspect the resource usage of each application individually, including plug-ins, so unwanted or overwhelming ones can be aborted whenever needed. In addition, web pages are constantly being tested against Google's databases for bugs and errors each time a new Chrome build is released.
Now let us move on to the issue of speed and the secret behind it. Chrome is built on the open-source WebKit rendering engine, favoured for its simplicity and easy-to-understand, easy-to-develop code. The greatest innovation added is a virtual machine (VM) for JavaScript, built by the V8 team in Denmark. The VM translates the source code into machine code only once; all later executions pass different parameters to this compiled copy and get the desired output, reducing the overhead of each execution and thereby greatly increasing browser speed. It also allows the same source code to be run on different platforms and operating systems, as the VM takes care of the translation.
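The compile-once idea can be illustrated with Python's built-in compile(), which similarly turns source text into a reusable code object that later runs reuse with different parameters. This is a loose analogy for V8's approach, not V8 itself:

```python
# Translate the source text into a code object exactly once...
code = compile("result = x * x + 1", "<expr>", "exec")

def run(x: int) -> int:
    # ...then reuse the compiled object with a different parameter each
    # time, avoiding the cost of re-parsing and re-translating the source.
    env = {"x": x}
    exec(code, env)
    return env["result"]
```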
Although JavaScript is classless, objects produced at run time that share common properties with existing ones are gradually grouped into the same hidden classes, which behave and respond in the same manner, so overall uniformity is maintained across the different activities on the net. Chrome's garbage collector places all pointers in one region of memory and all data in another, so whenever the need to free resources arises, the system only has to scan a small set of pointers at a specified place and delete the unwanted references. This makes look-ups faster and more efficient, and also minimises the risk of removing components that may be used later.
Chrome also provides a better search facility. Each tab has its own URL bar, called the omnibox, so actions stay isolated per tab. The omnibox lists matching options, the most visited sites, and suggestions based on the letters typed so far. Auto-completion is conservative, filling in only previously typed entries: whole page URLs are not completed, only website names. Opening the browser presents the most visited sites to choose from, or a new URL can be entered as desired. Private sessions are available, which are not remembered by the browser after such windows are closed. Pop-ups are allowed to live only in the tab where they were opened, but can be dragged out to any other window or tab, so there is no unwanted blocking either.
Another important feature is that Chrome makes the best possible effort to reduce the risk of malware and phishing by making all processes read-only with respect to hard-drive contents, so no changes can be made to the user's data by the malicious programs that abound on the net. Only two levels of permission exist: high and none. Data can be read from none to high and written from high to none, and the lower level, none, can only perform actions at the request of the higher one, preventing unauthorised access to data. Plug-ins do get additional access permissions and can be dangerous, but the risk is still less daunting than the multitude of viruses sprawling across the Internet; even if a plug-in is compromised, it can be closed down without affecting other processes, since each runs independently in Chrome. In addition to these static methods, Chrome continually downloads a list of sites known to host malware or phishing. If a visited website falls into this list, the user is notified of the risk and prompted before proceeding. This gives end-users full control of their web actions.
To address compatibility problems, Chrome provides a set of Gears that works behind the scenes at the developer's end to provide extra functionality. New developers can study and adapt to the browser's existing Gears configuration to make their products compatible and more efficient.
Arindham Chakroborty
Placing the Bell Muffle
Half muffled ringing is a special way of modifying the normal bell sound for sad and solemn occasions such as a funeral or the death of a statesman. Some churches ring half muffled during Lent. It is usual to muffle the backstroke, but a muffled handstroke may lend itself to hearing the 'music' in a quarter peal for a funeral. To ring the more usual open handstroke and muffled backstroke, place the muffler on the ball of the clapper on the face opposite to where the rope rises from the ground pulley, as shown in the diagram.
Fully muffled ringing is reserved for the death of a reigning monarch. Both strokes are muffled, although optionally the tenor may be muffled at handstroke and left open at backstroke.
For safety reasons the bells must all be down before attaching or removing the muffles, even though this means climbing into the bell pit and working from under the bell.
Muffler Fitting
This shows the old type of universal muffle, which fits any bell. It is held in position by a buckled strap in the groove between the ball and flight, and a boot lace holds the top of the muffle to the shaft. These can be very difficult to secure tightly enough to prevent them rotating and becoming ineffective.
Modern muffles are tailored to the size of the clapper ball and therefore only fit one bell. Velcro type material hold them securely in place. The manufacturer advises the use of non-slip paint on the ball area of the clapper.
SFS 4/2014
Is the shoulder distal or proximal to elbow?
It's proximal.
Is the hand located proximal or distal to the elbow?
The hand is distal, because your hand is farther away from your body. Proximal means closer to the body. Example: the elbow is distal to the chest, and the elbow is proximal to the wrist.
Are the alveoli distal or proximal to the bronchi?
The alveoli and bronchi are both parts of the respiratory tree. The main parts of this structure, in the order air passes through during inspiration, are: trachea, bronchi, bronchioles, and alveoli. The alveoli are therefore distal to the bronchi.
Is the ankle proximal or distal to the foot?
Proximal; anything closer to the midsection is proximal (closer) rather than distal (distant).
Is the foot distal or proximal to the knee?
Yes, the foot is distal to the knee, because the origin of the leg is at the hip, and the knee is closer to the hip than the foot is.
Difference between distal and proximal?
These are words indicating whether something is nearer or farther from a given point of reference. They are commonly used to pinpoint body parts in relation to the body midline.
Examples of proximal and distal?
The thigh is proximal (closer to the attachment point) relative to the foot; moving proximally from the wrist brings you to the elbow. The fingers are distal (farther from the attachment point).
Weather and Climate questions and answer revision notes
Questions and answers to help revision.
• Created by: Helena26
• Created on: 15-11-13 14:56
Weather and Climate Revision Answers
What is the definition of `weather'?
The state of the atmosphere on a local scale over a short period of time.
What is the definition of climate?
The average atmospheric conditions over a larger time scale and area. It is often defined as
the average weather conditions for a 30 year period.
List the layers of the atmosphere's structure.
Troposphere, Stratosphere, Mesosphere and Thermosphere.
Explain each one.
Troposphere: The zone closest to the Earth and where most weather takes place.
Exhibits the highest temperatures as radiation from the sun warms the Earth's
surface which then warms the air directly above it by the process of
convection, conduction and radiation.
However, this effect decreases rapidly with distance away from the surface, as
air temperature drops by 6.4°C for every 1,000 m (1 km) gained in height.
Wind speeds also increase with height as frictional drag with the Earth's
surface is reduced.
This is the most unstable layer containing most water vapour.
The end / top of the troposphere is called the tropopause, an isothermal layer
where temperature remains constant as altitude increases. It marks the limit
of the zone of weather and climate.
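The lapse rate quoted above gives a quick worked example; a minimal Python sketch (the surface temperature used below is an illustrative value, not from the notes):

```python
LAPSE_RATE = 6.4  # °C lost per 1,000 m of height gained, as quoted above

def temp_at_altitude(surface_temp_c: float, altitude_m: float) -> float:
    """Estimate air temperature within the troposphere at a given altitude."""
    return surface_temp_c - LAPSE_RATE * altitude_m / 1000.0
```

So with a surface temperature of 15°C, the air at 1,000 m would be roughly 8.6°C.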
Stratosphere: Characterised by a steady increase in temperature with height (a
shallower change than that of the troposphere), called a temperature inversion.
This is as a result of the absorption of solar radiation by the ozone layer at
25-30 km high.
The ozone layer absorbs enough ultraviolet (UV) radiation to make it safe for
humans; otherwise the radiation would be harmful.
The atmosphere is noticeably thinner in this zone as pressure decreases with
height, and there is a lack of water vapour.
Wind speeds also keep increasing towards the stratopause, another
isothermal layer.
Mesosphere: Temperature decreases rapidly (similarly to in the troposphere), but this time to
much cooler temperatures of around -90°C.
Here there is no water vapour or dust to absorb radiation.
Very strong winds of 3,000 kmph.
Culminates in another isothermal layer called the mesopause.
Thermosphere: Named due to the increase in temperature resulting from the absorption of UV
radiation by atomic oxygen found at this altitude.
Why does temperature decline with altitude throughout the troposphere?
The troposphere is heated from below: the Earth's surface absorbs solar radiation and warms the air in contact with it. At lower altitudes the denser air is closer to this heat source, so temperatures are higher; at higher altitudes the air is thinner and farther from the warmed surface, so temperatures are much lower.
Vertical motion not only transfers heat from the areas of positive heat budget by cooling as
the air masses rise, but is linked to the horizontal movements at higher altitudes which help
transfer warm air towards the Poles.
Global wind systems transfer heat and moisture from the sub-tropics towards the higher latitudes.
wet winds coming up from an area of energy surplus.
Northerly winds, however, keep Britain cooler.
It sweeps the Antarctic Continent at
around 4km deep and then moves into the major ocean basins. These motions are reciprocated by
less salty and dense surface currents which move north towards the Poles from the Indian and Pacific
Oceans.
Today we refer to the Hadley cells either side of the equator to describe the intense
insolation which causes warm air to rise by convection.
The low pressure created drags the trade winds in towards the equator where they are
forced to rise.
The zone where they meet is called the intertropical convergence zone (ITCZ).
This is a kind of meteorological equator which shifts north and south during the year.
Polar Continental (easterly) gives very cold temperatures in the winter (below 0°C), as it originates over the cold land mass of Eastern Europe, but warms slightly over the North Sea to become unstable in the lower layers, although the North Sea is not wide enough for it to become a warm air mass. Unstable lower layers bring heavy snow in Eastern Britain. Wind chill is also high, but if this air stream occurs in summer it brings warm conditions and is more stable, therefore causing dry conditions.
Britain affected by frontal rainfall which occurs when warmer air is pushed up over a
wedge of cooler air where two air masses meet in a frontal system.
Most common direction from which the wind blows is the southwest, but this is
variable, with long spells of easterly or northeasterly winds quite frequent in winter.
What is an anticyclone?
This is where air is descending, leading to high pressure systems; any water vapour present in the atmosphere is evaporated, leading to little cloud cover and precipitation. Isobars are far apart and the pressure gradient is gentle, leading to light winds that blow clockwise out from the centre of the high-pressure area. Anticyclones tend to pass much more slowly than depressions.
Describe summer anticyclones.
Monsoon Climate has 3 different seasons:
Oct-Mar: the only dry season, with temperatures of 21-25°C, little rainfall and winds blowing
from the north east, called the 'retreating Monsoon winds' - land-to-sea breezes.
Mar-June: this is a short season with extremely hot temperatures of 31°C, there is cloudless
sky with no rain and dry winds.
June-Sept: winds from the sea bring heavy rain where rivers are filled. Temperatures are a
little cooler.
As the strong winds sweep over the sea they increase the rate of evaporation.
As this moist air keeps rising it cools, condenses and releases latent heat to form
cloud and heavy rainfall.
This warms the atmosphere further and increases instability.
A central eye starts to develop which is 30-50 km in diameter and has subsiding air,
clear skies and high temperatures.
The extreme low pressure at the eye causes air to be sucked towards the centre as
powerful winds.
Mr A Gibson
These are really valuable and will be great for you to prepare for exam questions on this topic (regardless of exam board). These can serve as notes too for your folder.
SLIM analysis of Poetry
This method is used to examine the TECHNIQUES used by a poet in writing poems. It is useful in understanding the use of language in the writing of poems.
Michael Togher
on 16 July 2013
Poetry Technique
using the SLIMS method
How would you describe the poet's use of words: vivid, striking, effective, colourless and predictable? Is the language appropriate to the subject/theme? What effect does the language have?
Are there any striking examples of similes, metaphors, personifications or symbols in the poem? What is their effect?
How is the poem structured? Does it have stanzas with a regular number of lines, or any other interesting features of structural design?
Does the poem have any significant sound features? Does the poem use onomatopoeia, alliteration, or assonance? Does the poem rhyme? What are the effects of these sound features to the poem as a whole?
Fossils Push Back Origin of Land Animals
Paleontologists have discovered fossilized fragments of the oldest known land-adapted creatures: centipedes and tiny, spider-like arachnids dating back about 414 million years. The finds are forcing scientists to revise their thoughts about animal colonization of the continents, one of the most important steps in evolutionary history.
The fossils come from the city of Ludlow in Shropshire, England, where they were embedded within rocks from Earth's Silurian period, report Andrew J. Jeram of the Ulster Museum in Belfast, Paul A. Selden of the University of Manchester and Dianne Edwards of the University of Wales, who announced their discovery in the Nov. 3 SCIENCE. Prior to the Ludlow finds, the oldest known land animals dated to the early Devonian period, about 398 million years ago.
"This represents a very substantial step back in time," says William A. Shear, who studies the early evolution of land animals at Hampden-Sydney (Va.) College. "What this tells us is that we can look much farther back in the fossil record and expect to find more communities like this. We'll probably have to look much, much farther back to find the actual transitional forms [from which these land creatures evolved]."
Jeram and his colleagues say the age of the fossils suggests that the earliest land animals -- ancient arthropods -- emerged from the ocean soon after plants began spreading over the continents. Until now, paleontologists conjectured that animals lagged far behind plants in their adaptation to terrestrial life.
The researchers uncovered the fossils by dissolving Silurian rocks in hydrofluoric acid, which leaves behind exoskeleton fragments. Then they examined the animal parts under a microscope, attempting to decipher how the fragments fit together.
"It's a bit like doing a jigsaw puzzle -- or maybe a dozen jigsaw puzzles that have been thrown together -- without knowing what the picture looks like," Selden told SCIENCE NEWS.
The acid treatment unveiled pieces of legs, back plates and trunks from centipedes of unknown size and the body of a spider-like animal called a trigonotarbid arachnid. The trigonotarbid fossil measured 1.3 millimeters long, suggesting an animal about the size of a common flea. The most complex land plants of that era grew only a few millimeters tall and would have looked like an outdoor carpet covering the landscape, Shear says.
Because both trigonotarbids and centipedes were predatory animals, the researchers reason that early terrestrial communities must have included other arthropods that served as prey. The Ludlow remains did not offer clear evidence of such creatures, but Selden suggests the prey animals were small arthropods that munched on tiny, easily digestible bits of decayed plant material. This contrasts with the modern world, where animals at the lower end of the food chain subsist on live vegetation.
COPYRIGHT 1990 Science Service, Inc.
Author: Monastersky, Richard
Publication: Science News
Date: Nov 10, 1990
The unit of replication is a publication. A publication is an ordered sequence of transaction entries. One database transaction can add data to zero or more publications. The data contributed to a publication by a transaction is appended to the publication at the time of commit. Because commits are serialized database wide, items in a publication have a well defined order.
Each transaction entry in a publication has a unique sequence number within the publication. Each subscriber of a publication has a level of synchronization, which is the serial number of the last transaction from the publication which this subscriber has processed.
Each publication has exactly one publisher and zero or more subscribers. Any multi-master merge replication schemes will be based on this notion, with data to be merged back into the original source regarded as a separate publication and the merge regarded as a process between publications.
In order to publish data for replication by others a server must have a unique name within the group of servers participating in the replication. This server name is assigned to the server in its virtuoso.ini file in the DBName setting.
To publish data the publishing server initializes a publication with the repl_publish function, where it names the publication and assigns a log file name for it. The server can then start adding transactions to the publication, which can happen either under application control or implicitly.
Tip: See the repl_text function.
To subscribe to publications a server must also have a distinct DBName. It identifies the publishing server by associating a host name and port number with its logical name using the repl_server function. It can then call repl_subscribe() for each of the publications it subscribes to. Replication feeds from the publisher are replayed by the 'dba' user by default; the default can be changed (see the repl_subscribe() function). A publication is uniquely identified on the subscriber by the publishing server name and the publication name. Note that several servers in a network may publish like-named publications; these are logically distinct, each having its own publisher.
A subscriber may or may not be connected to the publisher at any point in time. If a subscriber is connected to the publisher it is either 'in sync' or 'syncing'. In the syncing state it is receiving transaction entries with numbers consecutive from its sync level up to the last serial number committed on the publisher.
At the start of the sync communication the subscriber indicates the level of the last successfully processed transaction in the publication. The sync exchange terminates when the subscriber reaches the last committed item on the publication. At this point the subscriber is said to be 'in sync'. The connection to the publisher is then maintained by default and is used to send sync information as it becomes available. This means that once an entry is appended to the publication by a committing transaction, it is sent to the 'in sync' subscribers without a separate request.
The publisher can terminate the replication feed by unilateral decision. It will do so if sending a message times out for too long or if the queue of 'to be sent' replication records exceeds a configurable threshold. This typically happens after communication failures or when the subscriber continuously processes the feed more slowly than the publisher produces it. A disconnected subscriber can reconnect at will, in which case it enters the 'syncing' state and receives transactions from the point where the feed was cut.
A subscriber can disconnect from the publisher at any time without ill effect.
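The life cycle described above (commit-ordered entries, a per-subscriber sync level, a 'syncing' catch-up phase, and push delivery once 'in sync') can be sketched in a few lines of Python. This is an illustrative model of the protocol, not Virtuoso code; class and method names are invented for the sketch.

```python
# Toy model of the replication protocol described above (not Virtuoso code).
class Publication:
    def __init__(self):
        self.log = []             # entry i has sequence number i + 1
        self.in_sync = set()      # connected, fully caught-up subscribers

    def commit(self, entry):
        """Append a transaction entry; push it to 'in sync' subscribers."""
        self.log.append(entry)
        for sub in self.in_sync:
            sub.apply(len(self.log), entry)

    def sync(self, sub):
        """'Syncing' state: replay entries after the subscriber's level."""
        for seq in range(sub.level + 1, len(self.log) + 1):
            sub.apply(seq, self.log[seq - 1])
        self.in_sync.add(sub)     # now 'in sync': fed on every commit

class Subscriber:
    def __init__(self):
        self.level = 0            # last processed sequence number
        self.applied = []

    def apply(self, seq, entry):
        assert seq == self.level + 1   # entries arrive strictly in order
        self.level = seq
        self.applied.append(entry)

pub = Publication()
pub.commit("txn 1"); pub.commit("txn 2")
sub = Subscriber()
pub.sync(sub)                     # catch-up replay of entries 1..2
pub.commit("txn 3")               # pushed immediately, no separate request
print(sub.level, sub.applied)     # 3 ['txn 1', 'txn 2', 'txn 3']
```

A reconnect after a disconnect is just another call to `sync()`, which resumes from the stored level, mirroring how a disconnected subscriber re-enters the 'syncing' state.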
A table
 (SERVER varchar,
  ACCOUNT varchar,
  NTH integer,
  LEVEL integer,
  IS_MANDATORY integer,
  IS_UPDATEABLE integer,
  SYNC_USER varchar,
  P_MONTH integer,
  P_DAY integer,
  P_WDAY integer,
  P_TIME time,
  primary key (SERVER, ACCOUNT))
is used to store information about published accounts and accounts this server is subscribed to.
A table
 (RS_SERVER varchar,
  RS_ACCOUNT varchar,
  RS_SUBSCRIBER varchar not null,
  RS_LEVEL integer not null,
  RS_VALID integer not null)
is used to store subscribers' status (pushback accounts for updateable subscriptions are there too). Subscribers for an account are added to this table automatically on each request from a subscriber to sync an account, or manually from the Admin UI.
The SYS_REPL_SUBSCRIBERS.RS_VALID column is used to designate subscribers whose replication account level is valid (lagging no more than REPL_MAX_DELTA behind the publisher's level).
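As a toy illustration (again, not Virtuoso code), the validity test behind RS_VALID amounts to a single lag comparison. The threshold value here is hypothetical; the real REPL_MAX_DELTA is server configuration.

```python
# Illustrative RS_VALID check: a subscriber stays valid only while its
# replication level lags the publisher by at most REPL_MAX_DELTA entries.
REPL_MAX_DELTA = 1000  # hypothetical threshold; real value is configured

def is_subscriber_valid(publisher_level: int, subscriber_level: int,
                        max_delta: int = REPL_MAX_DELTA) -> bool:
    """True while the subscriber's lag stays within max_delta entries."""
    return publisher_level - subscriber_level <= max_delta

print(is_subscriber_valid(5000, 4500))  # lag 500  -> True
print(is_subscriber_valid(5000, 3000))  # lag 2000 -> False, manual reinit
```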
The RS_VALID state of a subscriber is checked and updated on every sync request from that subscriber. If a subscriber is found to be invalid, all further sync requests from it are ignored; such a subscriber must be reinitialized manually and marked as valid using the Admin UI. |
Wednesday, April 1, 2009
A little money gets people to exercise in the long term
Sometimes, small incentives can bring great rewards. Think for example about the 10-cent toys some fast food outlets give to children in return for the loyalty of a whole family (and a lifetime of business thereafter). One could equate this to the little tryout that tips you into an addiction, providing great returns to the provider of the initial investment. But can we obtain such behavior on a more positive side?
Gary Charness and Uri Gneezy show that giving people a monetary incentive to attend a gym for a month will make them more likely to attend thereafter. They claim that a non-trivial incentive managed to create a good habit, but one can also view this incentive to be relatively small compared to the present value of future benefits from going to the gym.
It looks like context matters. People were already aware of the benefits; they just needed to overcome some fixed costs to try, and the incentive was sufficient. But the more interesting aspect of the study is that, compared to a control group, those who received incentives for a sufficiently long time kept going to the gym thereafter: a habit was created. But it looks like this habit was already underlying, waiting to be awakened. Why would this habit be stronger with the monetary incentive?
The experiments were carried out in Chicago and San Diego. In the second location in particular, there is a culture centered around the gym: it is the place to be, the meeting place. There is something of an expectation in California that everybody respectable goes to the gym. What does this mean for this study? The peer pressure to go to the gym should not differ according to whether one received an incentive. But does the feeling of guilt about not going to the gym become stronger if you used to be paid to go?
Mike Fladlien said...
Do you think that small incentives will get students to work more productively in class until it becomes a habit? I think students want to learn just like most people want to exercise, but they need a push. Dogbreath
Michelle Schaeffer said...
I think that with students, it's the fear of failure that ultimately pushes some to not be productive in and out of class. These are theoretically the ones that you want to give incentives to.
There is already a large network of incentives set up for students to do well, we call them grades. Unfortunately, some students don't value these grades at market value and the incentives lose their power.
The problem is the old saying, "C's get degrees," and it's true. Unless a student plans to go on to grad school, there is no incentive (other than personal satisfaction) for expending energy over and above the requirements of getting a C.
A monetary incentive already exists in some universities (automatic enrollment for scholarship money if over a certain GPA) but this is unavailable to those who aren't close to that GPA.
I don't know what the answer is to this but those are my thoughts. |
Saturday, September 22, 2012
This week we taught and practiced the General Response Protocols (GRP) for emergency situations. Each protocol has specific staff and student actions that are unique to each response - from the relatively mundane routine of a fire drill to the frightening realism of a hard lockdown.
We began with a conversation about classroom rules and staying safe in school. We established an understanding of new vocabulary--such as evacuation and procedure--whilst reviewing the handy dandy safety PowerPoint provided for us by the good folks at the "i love U guys" Foundation.
We were old pros when it came to following the procedure for a fire drill. Our timing was as good as Ricky, Fred and Ethel's when they had a dry run preceding the birth of Little Ricky on I Love Lucy. It went like clockwork - stop talking and listen for directions, get in line, exit the building and go to our assigned location. Piece of cake.
The Shelter-In was no problem either because basically we do nothing. We go about our business, unless of course our business involves leaving the building. That is one thing you cannot do during a shelter-in.
A hard lockdown is another story. My co-teacher Michelle chose a good day to stay home. I had to delicately finesse my way through the murky waters alone (upstream, without a paddle, at night...). This is scary stuff and I didn't want to frighten the little kindergarten children so I tried to find the proper tone, a mix of seriousness and adventure.
A hard lockdown implies that imminent danger is INSIDE the building and everyone needs to get to their safe place immediately! I told them this meant someone was in the building that shouldn't be there and to stay safe I needed to lock the doors (which I pretended to do as I talked with them to provide an unhurried, calm demonstration) and we all needed to go quietly to the large coat closet and hide.
We did.
There were a few nervous giggles as we stood there hiding before I announced, "The lockdown has been lifted" and we went back to the rug.
Once we were on the rug the questions started...
"Does that mean someone has a gun and wants to hurt us?"
"What happens if he gets in?"
"What if there is a fire in our safe place during the lockdown" (I thought, "what a wonderfully thought out, outrageous question!" and was fumbling for an answer when the little boy said, "Well, that'll never happen").
And then the tears..
"If they get us that means we'll never see our mommy's or daddy's again." (She started to cry, I started to cry.)
And finally the comic relief...
"Next time we practice a lockdown can we do it when I'm not here?"
You and me both kid, you and me both!
Cindy said...
We practiced tornado drills this week. In my class, we call them turtle drills. You have to get inside your shell and protect your neck. Turtles don't talk, so you can't either.
I tell my little ones why we are practicing, but I also tell them that I've never seen a tornado and they probably won't either. This is a just-in-case practice.
Gary said...
Cindy - I like the idea of turtle drills. It makes things very clear. Given the way things are going weather-wise we may be doing tornado drills in New York.
Greg Smedley said...
I'm actually envious of your specific drills. We don't have specific drills, we just have a "security drill" and we never know why. I wish I knew if the danger was outside or imminent inside so I could better protect my children. We are located in a neighborhood that is subject to much violence, and gunfire is common, so I wish we had better plans in place. When it comes to tornadoes, we are well versed because tornadoes are common in our area. In fact, we have a siren at our school, and the location of our classroom means we hear the siren and have moved to our safe place before the principal calls for us to seek shelter.
Smedley's Smorgasboard of Kindergarten
|
Monday, December 12, 2011
Vesta is a Differentiated Planetoid
Once again, I suspect that rocky planets such as Earth and Mars are produced as ejecta from Jupiter, which does the heavy lifting in terms of accumulating the mass and clearing out the Solar System. Jupiter is close to the point of rotational instability, and a mass the size of Earth rapidly acquired would be spun up and ejected back out.
I go further than that and propose Venus is a recent addition to the Solar System and that its scar on Jupiter is the Red Spot. However most of the planet making activity took place during the early years of the formation of the Solar system and we have a very good explanation for the formation of a solar system. The late arrival of Venus is plausibly caused by deliberate intervention as was the crustal shift that ended the Great Ice Age here on Earth.
Thus discovering a much smaller ejecta body such as Vesta is actually to be expected.
Dec 9, 2011: NASA's Dawn spacecraft spent the last four years voyaging to asteroid Vesta – and may have found a planet.
"We're seeing enormous mountains, valleys, hills, cliffs, troughs, ridges, craters of all sizes, and plains," says Chris Russell, Dawn principal investigator from UCLA. "Vestais not a simple ball of rock. This is a world with a rich geochemical history. It has quite a story to tell!"
Researchers believe this process also happened to Vesta.
Like Earth and other terrestrial planets, Vesta is differentiated into layers.
Deus ex machina? The ejecta model sends out a molten body of material that will obviously differentiate as it cools down. Please note that the surface temperature of the rock on Venus is still close to the temperature at which it is molten, as may be expected from a recent ejection event. This needs to be counteracted with a cometary bombardment that delivers methane and water and accelerates the cooling and recycling of the near crust.
It is my conjecture that once Earth is fully terraformed, our next task will be the terraforming of Venus ultimately providing Earth with a back up.
Vesta has so much in common with the terrestrial planets, should it be formally reclassified from "asteroid" to "dwarf planet"?
If anyone asks Russell, he knows how he would vote.
New NASA Dawn Visuals Show Vesta's 'Color Palette'
Image Advisory: 2011-375
More information about the Dawn mission is online at:
To follow the mission on Twitter, visit:
|
Friday, December 16, 2011
Now the Franco-German question (Europe's future hinges on Franco-German relations)
Now the Franco-German question
Germany will have to learn leadership, and France followership. Both will find it a wrenching experience. The rules of the European game changed for ever with the reunification of Germany. It has taken the euro crisis to spell out the brutal implications.
One cannot help feeling a little sympathy for German Chancellor Angela Merkel, who stands accused at once of aloofness and of overbearing leadership. One minute she is blamed for standing aside while the euro burns; the next, for dictating Teutonically harsh terms for its rescue. The criticism is a constant reminder that, for Europe, Germany has always been too big.
The new German question asks whether Europe – whether it is the European Union or a more closely integrated eurozone – can find a new equilibrium now that Germany is so visibly the preponderant power. This in turn marks the return of the Franco-German question. Berlin is assuming the role of leader with a mixture of hesitancy and tetchiness. Paris will struggle mightily to accept the place of follower.
The choreography is calculated to conceal this redistribution of power. The euro crisis has been cast as the Angela and Nicolas show – the German and French leaders smiling for the cameras at the Élysée; a jointly signed missive spelling out a euro rescue plan.
This is called keeping up appearances. For France, the survival of the euro is existential. Never mind the initial, enormous economic shock that would follow its failure. The break-up of monetary union would most likely see France slide into the continent's second division. Europe is the engine room of French power. Without it there would be nothing left of its global pretensions.
Mr Sarkozy, of course, has been fighting his corner – pressing for a Gaullist, intergovernmental arrangement rather than a leap to fiscal federalism. France has been attuned to the danger of Berlin's habit of elevating the avoidance of moral hazard above restoring confidence in financial markets.
In the end, however, Berlin has prevailed. As Charles Grant of the Centre for European Reform has observed, the proposals for a stability union presented to the Brussels summit were essentially written in Germany, even if the odd page was edited in Paris.
Assuming (perhaps foolishly) agreement at the summit, the present approach should secure a second chance for the euro: the more so if it provides cover for decisive intervention in the markets by the European Central Bank. But for the very reason it has been written in Germany, the strategy fails to offer a sustainable long-term answer.
The economic argument at the heart of all this never really changes. Instead it returns again and again to the disagreement that surfaced nearly 70 years ago among policymakers at Bretton Woods.
In 1944 John Maynard Keynes argued forcefully that the planned new exchange rate regime required symmetrical obligations on creditor and debtor countries to deal with any imbalances. If the system was to endure, austerity on one side had to be balanced by growth on the other.
Keynes lost the argument then, but governments have been returning to it ever since. During the 1980s it was at the heart of economic discord between the US on one side and Germany and Japan on the other. It runs through today's trade tensions between Washington and Beijing.
The big irony, though, is that this very same debate was present at the creation of the single currency. François Mitterrand's effort at the start of the 1980s to pursue an expansionary economic policy ended in humiliation when Helmut Kohl made fiscal rigour the price of the franc's continued place in the European exchange rate system. France resolved never again.
The outcome was the “franc fort” policy and a push to share economic decision-making between Germany and France. Once the D-Mark had been subsumed in a single currency, the austerity versus growth argument would finally be settled. That was the theory.
Germany is now within reach of the political integration it sought as a counterpart to monetary union when the euro was established. The danger is an assumption in Berlin that the new structure can be built to an entirely German design.
Ms Merkel's stability union will endure only if it acknowledges that Keynes was more than half-right. Any supranational scheme, whether enshrined in treaty or otherwise, that condemns much of Europe to indefinite austerity will not survive the realities of national politics.
If German leadership is to avoid being oppressive, it must recognise that fiscal union cannot be a one-sided affair. It was encouraging this week to hear Ms Merkel talk about the competitiveness problems in the weaker eurozone economies. It would be more so were she to talk about formulating a strategy for growth.
For its part, France must begin to reimagine the political geography of Europe. The Franco-German relationship will always be a pivotal one, but it is now unequivocally unequal. Paris needs friends beyond Berlin – in Warsaw, Rome and Madrid. If Britain's Tory party were ever to leave behind its European nightmare, there would also be a case to revive the old entente.
Radoslaw Sikorski, Poland's foreign minister, recently told an audience in Berlin that the big threat to Europe came not from German power but from German inactivity. Given the two countries' history, that was a pretty brave thing to say. Few would dispute that the survival of the euro now rests with German leadership. There must be more to that leadership, though, than the promise of austerity.
|
Born: September 29, 1547, in Alcala de Henares, Spain
Died: April 23, 1616, in Madrid, Spain
Miguel de Cervantes was born the fourth son of Leonor de Cortinas and Rodrigo de Cervantes. The latter was a deaf surgeon with a large family to support and limited means to do so. Cervantes' first poems, written in appreciation of Spain's Queen Elizabeth of Valois in 1568, were published while he was still a student. However, he left off writing while struggling to make a living as a chamberlain to Cardinal Giulio Acquaviva, whom he accompanied to Italy. In Naples, he joined the Spanish regiment for the 1571 naval battle against the Turks at Lepanto. Cervantes was shot twice in the chest and once in the hand while on board the ship La Marquesa. While he recovered enough to see further battle, he lost the use of his left hand.
Cervantes was captured by pirates in 1575 and taken to Algiers as a slave, where, after several unsuccessful escape attempts, he was ransomed by the Trinitarian friar Juan Gil in 1580. At age thirty-three he returned to Spain but was unable to find the usual employment for distinguished veterans. He began writing and produced a considerable amount of verse and plays as well as a novel, La Galatea, by 1585. During this time he also married, but was unhappy with, Dona Catalina de Palacios. Despite the amount of writing he was able to accomplish, he was unable to make a sufficient living from the sales of his work, and began to take government jobs, such as tax collector. He was subsequently imprisoned at least twice for controversial collection methods.
It was while in prison, however that Cervantes first conceived of the allegorical story of the adventures of Don Quixote, an idealistic gentleman obsessed with chivalrous deeds, and his realistic companion Sancho Panza. In 1605, the year when the first part of the story, The History of the Valorous and Wittie Knight-Errant Quixote of the Mancha, was published, Cervantes was living in poverty with his sisters, his niece and his illegitimate daughter Isabel Saavedra in Valladolid. Unfortunately while the story and its subsequent second part were immensely popular at the times of their publication and ever since, Cervantes did not ever profit significantly from the text, partly from poor management.
Don Quixote is deemed by many to be the first modern novel, holding a position of significant influence on ensuing prose fiction. Its enduring themes have since inspired, and have been represented in, operas, poems, films, a ballet and a modern day American musical (Man of La Mancha), as well as in the artwork of Honore Daumier and Gustave Dore. In print it has appeared in all modern languages, in over 700 editions.
In 1613 Cervantes published Novelas Ejemplares, a collection of short stories, followed by the second part of Don Quixote in 1615. His last work, Persiles y Sigismunda, another allegorical novel in whose prologue he foreshadows his own death, was finished just four days before he died in Madrid. |
BBC News
Last Updated: Monday, 1 October 2007, 18:14 GMT 19:14 UK
Arctic ice island breaks in half
By David Shukman
BBC science and environment correspondent
The BBC team sets up on Ayles Ice Island
The giant Ayles Ice Island drifting off Canada's northern shores has broken in two - far earlier than expected.
In a season of record summer melting in the region, the two chunks have moved rapidly through the water - one of them covering 98km (61 miles) in a week.
Their progress has been tracked amid fears they could edge west towards oil and gas installations off Alaska.
The original Manhattan-sized berg (16 km by 5 km; 10 miles by 3 miles) broke off the Ayles Ice Shelf in 2005.
Pictures from space show the parting of the ice blocks
I joined a team that landed on the ice island in May to carry out the first scientific investigation into what many see as a key indicator of global warming.
It is an unsettling thought that the very ice we landed on - and filmed on - for several hours has since ripped apart.
One of the scientists on that mission was Luke Copland of the University of Ottawa, and he told BBC News that the fact that the island had headed south was significant.
"The island became more vulnerable to breaking up with the warmer temperatures in more southerly latitudes, together with having less protection from the smaller amounts of surrounding sea-ice.
"It's relatively unusual for the ice island to drift so far south so quickly - many ice islands in the past have stayed within the Arctic Ocean, or within the northern parts of the Queen Elizabeth Islands."
Dr Copland said that the island had travelled so far south because of the small extent of Arctic ice this summer, influenced in turn by warmer conditions.
"The low sea-ice conditions this year have played a role. The sea-ice normally blocks ice inflow into the Queen Elizabeth Islands, but with less ice this year it has made it easier for the Ice Island to make its way in."
And his conclusion is clear: unlike ice islands which in the past might have lasted in the Arctic Ocean for 50 years or more, this one is destined to be shorter-lived.
"Ultimately, the ice island should break up faster because of the warmer temperatures - I'd be surprised if it lasted more than a decade or so."
Pictured in May, this block of ice has now split into two pieces
The team which landed on the Ayles ice block in May found it to have an average thickness of 42-45m (138-148ft) - the equivalent of the height of a 10-storey building. The great mass of ice has now split apart.
Arctic sea-ice shrank to the smallest area on record this year, as measured by satellite.
The figure shattered all previous satellite surveys, including the previous record low of 5.32 million sq km measured in 2005.
The Ayles Ice Island calved off the Ayles Ice Shelf in August 2005
The calving event was the largest in at least the last 25 years
A total of 87.1 sq km (33.6 sq miles) of ice was lost in this event
The largest piece was 66.4 sq km (25.6 sq miles) in area
This made the slab a little larger than Manhattan
Limited sea-ice in 2007 has seen the berg move 100s of km
|
Facial Recognition Software and You
Yes, the future of Black Mirror is here
Pamela Pavliscak
Who hasn't yelled at their phone, or wept on their laptop? Like every part of their lives, human interactions with technology have an emotional component. Yet your computer doesn't know if you're happy or sad, and it can't alter its user interface to respond to tears or laughter.
The important word there is "yet." As a design researcher and founder of Change Sciences, Pamela Pavliscak works with firms innovating in effective computing, sensor development, and AI. However, she sees a gap in their approaches. "I'm one of these people that loves the complex, messy, weird world of human beings, and how we relate to technology is very emotional. It's not rational at all, yet all these companies I'm working with, and all the designs that we're engaged with, are focused on the rational."
The underlying issue is how smart people interact with their dumb machines, and that's been a clear concern since ELIZA, the original chatbot developed at MIT in 1964. Pavliscak said, "People knew that it was fake, yet they developed an attachment because they were having a conversation, and conversation yields an emotional bond."
To this point, the emphasis when it comes to emotion in computing has been on emulation and manipulation, not comprehension. When emotion is factored into design, the end game is stimulating moods in the user, whether it's how an iPhone lies in the hand, or the quick-fix ecstatic rage of social media. However, Pavliscak said, "What we're fast coming up against is, that's not going to cut it in this new world where technology is embedded in every moment of our day-to-day existence."
Creating a fake human has been a long-established way to make people feel at ease with their machines: After all, HAL 9000 may have been a murderous supercomputer, but his gentle tones made his cold-blooded killing seem almost pleasant. Pavliscak said, "We all share this vision of the future that is that white, pristine, Airbnb-style room where everything is automatically happening for us, and I just wonder: Where's the life in that, where's the emotion in that, and where's all the stuff that's contributing to our emotional well-being? I don't think that's a conversation many of us are ready to have."
For Pavliscak, it comes down to a very simple understanding: "Emotion isn't a destination. Emotion is context." The technology is in its infancy (and, she warned, "Spoiler alert: It doesn't work very well") but there are engineers and psychologists developing facial recognition software that gauges mood, and wearable tech that reads more into skin temperature and heart rate than just biometrics. She sees a desperate need to add the creative arts to the R&D mix, since "when we think where we are with our understanding of emotion, so much of it comes out of literature and the arts, and that's something that isn't a voice in the development of our technology."
The next step is implementation, where there is both great potential and great risk. There are already interesting developments in therapy, for example for people suffering from PTSD, or with a diagnosis on the Autism spectrum. With the right technology, she said, "Friends and family can identify what's going on, or they can self-identify emotions."
Yet there is also the shadow of the darkest timeline represented by 2002's Minority Report, in which commercials recognize and target individuals. That's not so fantastical, since the bulk of the existing patents on emotion-linked facial recognition are held by advertising agencies. Pavliscak said, "Imagine your refrigerator knowing that you're deeply depressed, and offering you ice cream. Or the AI knows that you're stressed out from work, so it holds off advertising sleep meds for a couple of hours so you can get your work done, because your boss has keyed in. You can spin out into some very dark tales."
When Your Internet Things Know How You Feel
March 10, 3:30pm, JW Marriott Salon FG
|
↑ Return to Season Extension
Heat From Water
Wall-o-waters work to add early-season heat to vegetables. They collect warmth during the day from the sun, and then release it at night to keep plants warmer. They work well in my experience.
But wall-o-waters are an absolute pain to erect and to keep standing. I heard a suggestion once to open them up around a five-gallon bucket, fill them with water, then carry them to the plants. It helped, but I still don’t like them.
Using water to warm your vegetables can be done in other ways. Gallon milk jugs can be used. We buy laundry detergent in two-gallon plastic jugs, which are even taller for more heat and protection. Spray-painting these jugs black will add to their heat holding ability.
To provide more heat, five gallon plastic buckets can be used, or 32-gallon garbage pails. I set these next to a few of my tomato plants in my hoophouse. After cold temperatures have killed the tomatoes that aren’t near the water, the tomato plants with this warmth next to them are still looking green and beautiful.
A metal container is quicker at transferring the sun’s heat into the water, but plastic will also work. Both work best if they are painted black. Or you can place a large black trash bag over the barrels.
Large water bags are sold for greenhouse use. The merchant recommends at least three gallons of water for every square foot of greenhouse wall that admits sunlight. That’s a lot of water.
Other heat-absorbing materials are rocks or concrete blocks, but water has about two to three times their heat-holding capacity.
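To put rough numbers on the water-as-thermal-mass idea above, here is a back-of-the-envelope sketch in Python. The specific-heat and gallon-to-kilogram figures are standard physical constants, not from the post, and the 10 °C overnight temperature drop is an assumed example, not a measurement.

```python
# Rough estimate of heat released by water used as thermal mass.
# Assumptions (not from the post): specific heat of water is about
# 4.186 kJ/(kg*degC), and one US gallon of water weighs about 3.785 kg.

SPECIFIC_HEAT_WATER = 4.186  # kJ per kg per degree C
KG_PER_GALLON = 3.785        # approximate mass of 1 US gallon of water

def heat_released_kj(gallons: float, temp_drop_c: float) -> float:
    """Heat (kJ) released as the water cools by temp_drop_c degrees C."""
    mass_kg = gallons * KG_PER_GALLON
    return mass_kg * SPECIFIC_HEAT_WATER * temp_drop_c

# A 32-gallon garbage pail cooling 10 degrees C overnight releases
# roughly 5,000 kJ next to the plants.
print(round(heat_released_kj(32, 10)))
```

The same function also shows why the merchant's three-gallons-per-square-foot rule adds up to "a lot of water": each gallon stores only a modest amount of heat, so useful thermal mass takes volume.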
Presented below are the assumptions, principles, and constraints used in this chapter.
1. Economic entity assumption
2. Going concern assumption
3. Monetary unit assumption
4. Periodicity assumption
5. Measurement principle (historical cost)
6. Measurement principle (fair value)
7. Expense recognition principle
8. Full disclosure principle
9. Cost - constraint
Identify by number the accounting assumption, principle, or constraint that describes each situation below. Do not use a number more than once.
(a) Permits the use of market value valuation in certain specific situations.
(b) Rationale why plant assets are not reported at liquidation value. (Do not use measurement principle.)
(c) Allocates expenses to revenues in the proper period.
(d) Indicates that personal and business record keeping should be separately maintained.
(e) Ensures that all relevant financial information is reported.
(f) Indicates that market value changes subsequent to purchase are not recorded in the accounts. (Do not use measurement principle.)
(g) Separates financial information into time periods for reporting purposes.
(h) Assumes that the dollar is the “measuring stick” used to report on financial performance.
• Created June 07, 2013
Arabic Alchemy
• Published : September 28, 2011
Arabic Alchemy
The origins of Arabic alchemy date back to the 7th century, when the Arabs began their territorial expansion. Their empire and influence eventually stretched from India to Andalusia. Expansion also brought contact with ancient cultural traditions, which Arabic culture absorbed and reinterpreted very readily. Alchemy had been practiced in ancient Greece and Hellenistic Egypt, so when the Arabs expanded into Egypt, they found a strong alchemical tradition. Arabic alchemy relied on doctrines derived from the multicultural milieu of Hellenistic Egypt. By the later part of the 8th century, Arabic knowledge of alchemy was at its peak, and a huge and impressive body of alchemical writing was produced. However, even though much valuable information was known about alchemy, much also remained a mystery.

The origins of alchemy are steeped in legend. The etymology of the word "alchemy" is still foggy and unclear, but there are several theories about where the word could have come from. One of the most plausible origins is the Egyptian word "Kam-it," or "Kem-it," which denotes the Land of Egypt, also known as the Black Land. Less likely origins are the Greek word khumeia or khemeia, meaning the art of melting metals and of producing alloys, and the Hebrew term Kim Yah, meaning "divine science." Some alchemists thought the mythological origins of alchemy could be attributed to the angels who fled from God and taught biblical figures the secrets of mining and metals. This idea helped dignify the origins of alchemy, so it wouldn’t be persecuted due to its close relation to magic.

Beyond its origins, alchemy also made many contributions. Arabic alchemists contributed greatly to the history of alchemy. These alchemists offered the very earliest descriptions of some of the...
- dotTech - https://dottech.org -
Students create greenhouse designed for Mars
In May of this year, NASA held its International Space Apps Challenge across several cities around the world. The event took place over the course of 48 hours in each city, but at the end of the day, only seven contenders had a chance of winning the top prize. The best in our eyes, however, is the team that created a greenhouse designed for Mars.
The team is made up of students from Greece, who are interested in creating a self-supporting greenhouse capable of growing spinach on the red planet’s surface. It’s a pretty clever idea, and one that could benefit the human race if NASA takes a serious interest in it. At the moment, humans are finding it more difficult each day to locate farmland; on many occasions we take land away from wildlife, which puts species in danger of extinction.
So, how does this thing work? Well, it is solar-powered and has a protective dome to keep plant life safe from whatever radiation and heat is found on Mars. Furthermore, the team claims this bad boy is designed to grow spinach over a 45-day period.
As of now, the Popeye (that’s what they call it) is designed to feed astronauts on Mars, but we are certain this piece of tech could be used for so much more as technology advances in the future.
[via Reuters]
Talk:Anatolic Theme
From Wikipedia, the free encyclopedia
Jump to: navigation, search
Άνατολικόν [θέμα] and θέμα Άνατολικῶν[edit]
Can anyone explain the relation between these two phrases? I know Άνατολικῶν is plural, genitive. And I think Άνατολικόν is singular. But shouldn't Άνατολικόν and Άνατολικῶν have the same number form? --Qijiang ok (talk) 21:49, 1 January 2015 (UTC)
The reason is that the meaning is different in the two cases: Άνατολικόν θέμα means as much as "the Eastern theme", θέμα Άνατολικῶν "the theme of the Easterners". The latter is the original name, but the former also appears in Byz. and modern literature. Constantine 10:03, 2 January 2015 (UTC)
Did Byzantines call it Άνατολικοι? Just like Byzantines said of Ὀπτιμάτοι and θέμα Ὀπτιμάτων. Qijiang ok (talk) 09:48, 24 June 2015 (UTC)
Yes, Άνατολικοί ("the Easterners") would be the plural nominative form of the name. Constantine 14:04, 24 June 2015 (UTC)
In wiki of Anatolia, it says Greek name of Anatolia is Ἀνατολή, and Anatolia means "East". So, what does Άνατολικόν and Άνατολικῶν mean? Easterner(s)? In this way, Άνατολικόν θέμα means "Easterner theme", θέμα Άνατολικῶν means "theme of Easterners". Am I right? Another question: Are Άνατολικόν [θέμα], θέμα Άνατολικῶν, Άνατολικοί all correct names of this theme for Byzantines? Qijiang ok (talk) 14:05, 27 June 2015 (UTC)
On the meaning, yes, look above. "Άνατολικόν θέμα" would be "Eastern theme". It would not usually be used simply as Άνατολικόν (i.e. without the θέμα). Άνατολικοί would usually refer to the people and/or the army of the theme, rather than the theme itself. Constantine 15:48, 27 June 2015 (UTC)
External links modified[edit]
Hello fellow Wikipedians,
I have just modified 11 external links on Anatolic Theme. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
Cheers.—InternetArchiveBot (Report bug) 12:59, 12 October 2016 (UTC)
Adolf Hitler Background
Adolf Hitler is probably one of the most infamous people of all time, the Fuehrer of Nazi Germany and chief orchestrator of the Holocaust, in which over 6 million Jews are reported to have died. He was the instigator of World War II and leader of the Nazi party. He gained popularity in Germany after promoting Antisemitism, Pan Germanism and Anti-Communism. Adolf Hitler believed Germans to be superior to other races, who were viewed as inferior. He also denounced Capitalism, as well as Communism, as being parts of a Jewish Conspiracy.
Towards the end of World War II, Hitler's interference in the strategic offensive in Russia cost the Axis powers dearly. His refusal to allow forces to withdraw at the battle of Stalingrad led to the death of 200,000 Axis soldiers, with over 235,000 taken prisoner. His military judgment only became more and more erratic as time went on. Between 1939 and 1945 there were multiple attacks on the life of Hitler, the most famous being the attempt by Claus von Stauffenberg, who planted a bomb in Hitler’s headquarters. Hitler narrowly survived because one of his aides moved the briefcase containing the bomb behind the leg of a heavy conference table. This led to sharp reprisals from Hitler, with over 4,900 executions.
Claus von Stauffenberg
By the end of 1944 the Red Army and the Allies were advancing through Germany. After the Ardennes offensive failed, Hitler ordered the destruction of all military devices and industrial infrastructure before it could fall into the hands of the Allies. This command was disobeyed at the last by Albert Speer, the Minister for Armaments. Hitler knew his time had come, and on April 20th 1945 made his last trip to the surface from his underground bunker. With the Soviets encircling Berlin, Hitler commanded Waffen-SS General Felix Steiner to launch an offensive. Upon finding out that Steiner had disobeyed, he ordered everyone except Wilhelm Keitel, Alfred Jodl, Hans Krebs, and Wilhelm Burgdorf to leave the room, and went on a now famous tirade against the treachery of his commanders. He ended his speech stating that he would stay in Berlin until the end and shoot himself, having finally given up hope.
Adolf Hitler and Eva Braun
On April 29th he married Eva Braun and dictated his will to his secretary. The events were witnessed and documents signed in the presence of Krebs, Burgdorf, Goebbels, and Bormann. On the 30th of April 1945 Hitler shot himself and Eva Braun bit into a cyanide capsule. Hitler's body was burned, and all that could be used to identify the Fuehrer was his lower jaw. It should be noted that theories to the contrary, that the Fuehrer and Eva escaped the city, are regarded as fringe theories by the vast majority of historians. Heinz Linge, Hitler’s valet, heard a gunshot and found the Fuehrer dead of a self-inflicted gunshot, next to his newly married wife, who had apparently taken cyanide. However there is some debate as to how he died and who found him. Not only are the German accounts not trusted, neither are the accounts of the Soviets, who were the first on the scene. They were not allies of the West as such; it was more that they had a common enemy. There was a race to Hitler’s bunker, which was won by the Soviets.
Main Hitler Conspiracy – Adolf Hitler fakes death
A theory that surrounds the death of every famous figure. There are a number of offshoots to this theory. It is often contended that declassified FBI documents contain a number of alleged sightings of Hitler. However the FBI states within these documents that the claims cannot be substantiated.
Gustav Weler
It has been contended that the bodies that were shot and found were doubles, though there is no evidence that such is the case, merely claims. However it is well known that Hitler did in fact have a double, Gustav Weler. It is also pointed out that in 2009 a skull fragment with a bullet hole was shown not to be the skull of Hitler but that of a young woman. However this point is moot, as while this skull was widely believed to be the skull of Hitler, no Soviet or Russian officials ever claimed that it was. The dental remains were what actually confirmed the fragments to be the remains of Adolf Hitler and Eva Braun.
Adolf Hitler fakes Death – Hitler in Argentina
Grey Wolf: The Escape of Adolf Hitler 2011
The book titled Grey Wolf: The Escape of Adolf Hitler (2011) is the most popular piece of literature documenting the alleged escape of Hitler from Berlin to Argentina. The book was written by British authors Simon Dunstan and Gerrard Williams. It is founded on the idea that Hitler was hugely rich from royalties from his book Mein Kampf, which was mandatory reading for all those within the Third Reich, and that the Nazis had stolen art and gold from occupied countries. This wealth was funneled by the Fuehrer to Argentina, where he made his escape. The book contends that he died in 1962 after living in a Bavarian-style mansion in Inalco, just off the Chilean border. The contention of the authors is that there is no video or photography of the death and that those who testified had an interest in keeping the escape of Adolf Hitler a secret. The main contentions are:
1. Hitler had the means to escape the city and huge resources, being the richest man in Europe.
2. Confirmed records of U-Boat landings off the coast of Patagonia.
3. Written testimonies of Argentinian people who saw him or worked for him (attendant, cook, nurse etc).
4. FBI documents which suggest that Hitler might be alive in Argentina.
This book is widely ridiculed by most historians and has no substantiation. Again, keeping such a deception quiet would require the consent of so many people over such a long time period that it is regarded as entirely implausible. Historian Guy Walters was very outspoken about the book, describing it as “rubbish”, adding: “There’s no substance to it at all. It appeals to the deluded fantasies of conspiracy theorists”.
In 2014 a controversial documentary was produced by Gerrard Williams called Grey Wolf. It features a number of people who claim to have seen Hitler in Argentina.
Both the book and the film Grey Wolf have been backed up by a retired CIA official who has claimed that Adolf Hitler faked his own death. The claim is that Hitler made his way to Argentina initially by Luftwaffe plane and then by a submarine which collected him at the Canary Islands and carried him to the coast of Patagonia. Bob Baer and his team claim to have access to over 700 pages of documentation that has recently been declassified. One document states that British Intelligence were aware that Hitler was flown from Berlin via a Luftwaffe plane. Baer and his team believe that the corpse found was a double, and that it was found to be 5 inches shorter than Hitler. The body was initially discovered by the Russians. It is also claimed that there was a fifth exit from the bunker which was not found, though it is hard to believe that an exit would simply not be found in a case as important as this. It is also contended by many theorists of this line of thought that many Nazis were not sentenced and all made a mass exodus to South America, where they continued their plans and have since infiltrated corporate America and in particular the pharmaceutical industry. It is true that tens of thousands of Nazis escaped and were not sentenced, including Adolf Eichmann and Josef Mengele.
Adolf Hitler Fakes Death – Hitler in Brazil
In another version of this theory Hitler lived well into his 90s and went to Paraguay and then Brazil. He came to be known as “The Old German” in the town of Nossa Senhora do Livramento after adopting the name Adolf Leipzig (why he would keep his first name is not explained in this theory). Brazilian Jew Simoni Renee Guerreiro Dias wrote the book Hitler in Brazil – His Life and Death, claiming that Hitler went to Brazil looking for buried treasure given to him by associates in the Vatican. She contends that Leipzig’s remains be exhumed and his DNA tested against the living relatives of Hitler. She further claims that he chose the name Leipzig as it was the birthplace of Hitler’s favorite composer, Johann Sebastian Bach. She photoshopped a picture of Adolf Leipzig with a mustache and used it as evidence that he was Adolf Hitler; this picture is cited as evidence that Hitler fled to Brazil and changed his name to Adolf Leipzig. Historians and academics have dismissed the idea that Hitler came to Brazil after his alleged escape from Berlin; there is simply no substance whatsoever to the theory. Contrary to the official story, which indicates Heinz Linge as the man who found Hitler, this book indicates that it was his bodyguard Rochus Misch who heard a gunshot and came in to see the Führer slumped dead over the table. Misch died in 2013.
Adolf Hitler fakes death – Convincing Evidence
Hitler most likely died in the bunker in 1945. However it is impossible to know for sure. There are a few key points to consider:
• The Soviets were the first to arrive on the scene. Thus the information went from those closest to Hitler (four Nazis who were utterly devoted to him), to the Soviets, who passed on information to the Allies, who issued headlines to the public. It would be naive to believe that the information was not altered by one of these parties before reaching public eyes and ears.
• At the time of Hitler's death, media outlets all over the world were printing headlines asking whether or not Hitler really died, including the New York Times.
• There was a report in a Swedish newspaper on April 26th indicating that a double had been put in Hitler’s place in order to “die on the barricades”. The paper, citing the free German Press Service, said that the man had been trained to talk and act like Hitler in every possible way. Hitler was known to have had at least 6 doubles.
• When Stalin was asked by President Truman whether or not Hitler was dead, he simply replied “No”. Stalin’s top officer, Georgy Konstantinovich Zhukov, reported that “We have found no corpse that could be Hitler’s”.
• No German witnesses ever saw or identified the remains said to be Hitler’s, and this could not be due to a shortage of witnesses. This is in sharp contrast to the case of Nazi Joseph Goebbels, who had 20 Germans line up to identify his body (Goebbels had committed suicide along with his wife and children).
• The body of Joseph Goebbels was put on display and photographed from all angles. There was but a single photo taken of Hitler. Similarly, there were numerous photographs taken of President Kennedy and Colonel Gaddafi. To quash any possibility of survival myths taking hold, such photographs have to be taken. This was not done in Adolf Hitler’s case.
Adolf Hitler Fringe Conspiracy Theories
There are also a number of theories which have emerged that go much further than a mere escape. One such theory relates to the black magician Aleister Crowley. Crowley claimed that Adolf Hitler was his magic child, a term used to describe the product of an unguarded psyche. The demoralization of Germany was the goal of this dark magician –
“It was necessary to persuade the Germans that arrogance and violence were sound policy, that bad faith was the cleverest diplomacy, that insult was the true meaning of winning friendship, and direct injury the proper conjuration to call up gratitude.”
Inquiries into such theories involve the occult and black magic, which invariably cannot be proven. Another theory is that Adolf Hitler did not have any hatred whatsoever of Jews until after his medical treatment. He had earned considerable awards and honors serving in the Bavarian army, and his speeches changed from 1919 onward. Hitler was hospitalized after being blinded with mustard gas in World War One; it was said to be a miracle that he was able to see again, and he only decided to enter politics while in the hospital. The Louisiana Journal of Forensic Science indicated that a hypnotic suggestion given to Hitler during his 1918 treatment could have been what instilled the belief that he was meant to rule the world and eradicate the Jews. It is agreed by psychohistorians that the biggest behavioral change in Hitler took place after his treatment. He found new qualities of ceremonial speech and rhetoric, and lost the talent he once had for painting. The theory contends that the hatred Hitler had for Jews was implanted there by brainwashing techniques, drugs and hypnosis.
Adolf Hitler fakes Death – Conclusion
When all the evidence is taken together and analyzed, it seems that, on the balance of probabilities, it’s around 50/50. There are a number of questions surrounding Hitler’s death which do not add up. However there is no conclusive evidence of an escape. The balance of probabilities might swing just in favour of the official story, but not by a wide margin, due to the huge number of question marks, and also due to the correct identification of Hitler via dental fragments. This has not been dealt with accurately by any theorists, and until it is we will have to assume that the remains are Hitler's and his death was a suicide.
senses (smell, taste, and so on)
The flashcards below were created by user Hbottorff on FreezingBlue Flashcards.
1. the sense of smell
2. dissolved chemicals that stimulate the olfactory receptors
3. sensory neurons within the olfactory organ
Olfactory Receptor cells
4. these nerve axons collect in the cribriform plate and carry impulses along the olfactory tract
Olfactory nerve fibers
5. Axons leaving the olfactory epithelium collect into bundles in here
Cribriform plate
6. the first synapse of smelling occurs here
olfactory bulb
7. axons leaving the olfactory bulb carrying impulses follow this to the cortex
Olfactory tract
8. The final place in the olfactory cortex the smells are integrated (x2)
Limbic system and hypothalamus
9. the sense of taste
10. the other name for gustatory receptors distributed across the tongue, throat, and larynx
Taste receptors
11. one type of taste sensation that is characterized by pleasant savory tastes.
12. this taste sensation receptors are sensitive to amino acids and small peptides and nucleotides
13. This sensory receptor of the pharynx sends information to the hypothalamus and affects water balance and the regulation of blood volume
Water Receptors
14. the primary taste sensations are
sweet, salty, sour and bitter
15. G proteins found in taste receptors that experience sweet, bitter and umami sensations are called
16. gustatory cells are stimulated by
dissolved chemicals
17. Where are the sensory cells located for equilibrium and hearing?
internal ear
18. What are the receptor cells of the internal ear called
hair cells
19. Why are hair cells called "hair cells"
the surfaces are covered with processes similar to cilia and microvilli
20. What type of sensory receptor are hair cells
21. what is the name of the support cells associated with the hair cells
22. the ear is divided into three anatomical regions, what are they
External, Middle, and Inner
23. This structure consists of elastic cartilage and is used to collect sound waves and funnel them into the external acoustic meatus
24. this is the visible portion of the ear which collects and directs sound waves toward the middle ear
external ear
25. this portion of the ear goes by two names that describe the air filled cavity separated from the external acoustic meatus by means of the tympanic membrane . what are the two names
tympanum or eardrum
26. this portion of the ear contains the sensory organs for hearing and equilibrium
internal ear
27. this is the passage way within the temporal bone thru which sound waves travel from the external ear to the tympanic membrane
external acoustic meatus
28. these are glands which secret ear wax
ceruminous glands
29. what is the wax in the ear called
30. what are the two main functions of the cerumen
slow the growth of microorganisms and keep bugs and debris out of the ear
31. this structure known as the eardrum lies at the end of the external auditory meatus and separates the external ear from the middle ear
tympanic membrane
32. this structure permits pressure equalization on either side of the tympanic membrane
auditory tube
33. what is a middle ear infection called
otitis media
34. the middle ear contains three auditory ossicle bones. in order what are they
malleus, incus, and stapes
35. this is a collection of fluid filled tubes and chambers in the inner ear
membranous labyrinth
36. the membranous labyrinth contains fluid called
37. this is a shell of dense bone that surrounds and protects the membranous labyrinth
bony labyrinth
38. the bony labyrinth is composed of three parts which are
semicircular canals, the vestibule, and the cochlea
39. this part of the inner ear provides equilibrium sensations by detecting rotation, gravity, and acceleration.
vestibular complex
40. this portion of the vestibular complex consists of three semicircular canals which detects rotational movements of the head in three different planes
semicircular ducts
41. this specific portion of the vestibular complex consists of two chambers with receptors that are sensitive to head position relative to gravity and linear acceleration
utricle and saccule
42. the movement of the stapes at the oval window generates pressure waves that stimulate hair cells at specific locations along the length of this structure and is used in hearing
cochlear duct
43. the hair cells of the utricle and saccule are clustered in oval structures called
44. in regards to the eye located in the retina, these see colors of black and white, highly sensitive, they enable us to see in dimly lit rooms, at twilight and pale moonlight.
45. this part of the retina provides us with color vision, giving us sharper, clearer images. but these require more light to see.
46. this light sensitive pigment is found inthe rods
47. this light sensitive pigment is found in the cones
48. this kind of vision is less than 20 ft. away and the lenses can actually change and adjust
49. the region of the ampulla within the ear that contains the receptors is known as the
crista ampullaris
50. in regards to the ear, each crista ampullaris is bound to a flexible gelatinous structure called the
51. the fluid found in the ear is called
52. hair cell processes are bound to a gelatinous membrane called the
otolithic membrane
53. the otolithic membrane contains densely packed calcium carbonate crystals called
54. what are the accessory structures of the eye
eyelids, eyelashes, and the epithelium of the eye
55. this is a transparent area on the surface of the eye through which light travels to the inner eye
56. this is the opening at the center of the colored iris which light passes into the eye after it passes the cornea
57. this is the covering of the inner surface of the eyelids and outer surface of the eye
Conjunctiva (palpebral [eyelids] and ocular [eye surface])
58. this structure produces, distributes and removes tears.
lacrimal apparatus
59. the lacrimal apparatus is composed of what six structures
lacrimal gland, tear ducts, lacrimal puncta, lacrimal canaliculi, lacrimal sac, nasolacrimal duct.
60. this structure produces tears that lubricate, nourish, and oxygenate the corneal cells
lacrimal gland
61. this is an antibacterial enzyme found in tears
62. this structure delivers tears from the lacrimal gland to the space behind the upper eyelid
tear ducts
63. this structure consists of two small pores that drain the lacrimal lake
lacrimal puncta
64. this is a small canal that connects the lacrimal puncta to the lacrimal sac
lacrimal canaliculi
65. this structure is a small chamber that nestles within the lacrimal sulcus of the orbit
lacrimal sac
66. this structure originates at the inferior tip of the lacrimal sac and allows tears to pass through it into the nasal cavity
nasolacrimal duct
67. what is the name of the disease which causes inflammation of the conjunctiva
conjunctivitis aka pink eye
68. the wall of the eye has three layers
the fibrous, vascular, and inner
69. the outer most layer of the eyeball which consists of the cornea and sclera is
fibrous layer
70. what are the three main functions of the fibrous layer
supports/protects, attachment site for muscles, and contains the cornea
71. this layer contains numerous blood vessels, lymphatic vessels, smooth muscles of the eye
vascular layer aka uvea
72. this is the inner most layer of the eye where the light energy is collected
inner layer or retina
73. what are the 4 functions of the vascular layer
blood vessel route, regulating light, regulating the aqueous humor, controlling lens shape (essential in focusing)
74. the vascular layer is composed of three structures
iris, ciliary body, and choroid
75. this structure gives eyes their color and controls the pupil size
76. this structure acts as an anchor for the suspensory ligaments which hold the lens in place
ciliary body
77. this structure is covered by the sclera and has a capillary network that delivers nutrients to the neural tissue within the inner layer.
78. these cells within the eye are sensitive to light
79. the ciliary body and lens divides the eye into two structures what are they
ciliary muscle and processes
80. the anterior cavity of the eye is divided into two structures
anterior and posterior chambers
81. what fluid is found in the anterior cavity
aqueous humor
82. what fluid is found in the posterior cavity
vitreous humor
Properties of minerals
1. Properties of Minerals
2. What is a mineral? • A naturally occurring, inorganic solid that has a crystal structure and a definite chemical composition. • More than 3,000 identified minerals. • About 20 minerals make up most of the Earth’s crust.
3. Characteristics of a mineral 1. Naturally occurring 2. Inorganic 3. Solid 4. Crystal structure 5. Definite chemical composition.
4. Naturally Occurring • Mineral must occur naturally on Earth – Gold, copper, silver, graphite
5. Inorganic • The mineral cannot arise from materials that were once part of a living thing • Coal occurs naturally in the Earth’s crust, but it comes from the remains of plants and animals that lived millions of years ago.
6. Solid • A mineral is always solid, with a definite volume and shape.
7. Crystal Structure • The particles of a mineral line up in a pattern that repeats over and over again. • A crystal has flat sides, called faces, that meet at sharp edges.
8. Definite Chemical Composition • A mineral always contains certain elements in definite proportions – For example, the mineral quartz has one atom of silicon for every two atoms of oxygen.
9. How do we identify a mineral? • Each mineral has its own specific properties that can be used to identify it. 1. Hardness 2. Color 3. Streak 4. Luster 5. Density
10. Hardness • In 1812, Friedrich Mohs, a mineral expert, invented a test to describe and compare the hardness of minerals. • The scale ranks ten minerals from softest to hardest. • A mineral can scratch any mineral softer than itself.
11. Mohs Hardness Scale
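The scratch rule on the hardness slide ("a mineral can scratch any mineral softer than itself") boils down to a single comparison of Mohs values. A minimal sketch in Python; the ten minerals and their values are the standard Mohs scale, and the function name is just illustrative:

```python
# The standard Mohs hardness scale, softest (1) to hardest (10).
MOHS = {
    "talc": 1, "gypsum": 2, "calcite": 3, "fluorite": 4,
    "apatite": 5, "orthoclase": 6, "quartz": 7, "topaz": 8,
    "corundum": 9, "diamond": 10,
}

def can_scratch(a: str, b: str) -> bool:
    """True if mineral a can scratch mineral b, i.e. a is strictly harder."""
    return MOHS[a] > MOHS[b]

print(can_scratch("quartz", "calcite"))   # True: quartz (7) > calcite (3)
print(can_scratch("gypsum", "diamond"))   # False: gypsum (2) < diamond (10)
```

This is also how the test is used in the field: scratch an unknown sample with reference minerals and bracket its hardness between the hardest one that fails and the softest one that succeeds.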
12. Color • Color can be used to identify only those few minerals that always have their own characteristic color. – Malachite is always green. – Azurite is always blue. • Many minerals, however, like quartz, can occur in a variety of colors.
13. Streak • A streak test can provide a clue to a mineral's identity. • The streak of a mineral is the color of its powder. • You can observe a streak by rubbing a mineral against a streak plate.
14. Luster • Luster is the way a mineral reflects light from its surface. • Minerals containing metals are often shiny. • Other minerals, such as quartz, have a glassy luster.
15. Density • No matter what the size of a sample, the density of a mineral always remains the same. • First determine the mass of the mineral on a balance. • Then place the mineral in water to see how much water it displaces. • The volume of the displaced water equals the volume of the mineral.
16. Testing Density: The rock's mass is 300 g, and it displaces 100 cm³ of water, so the rock's volume must be 100 cm³. D = M/V = 300 g / 100 cm³ = 3 g/cm³
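The displacement procedure on the slides can be sketched in a few lines of Python. The numbers are taken from the example slide, treating the mass as 300 g since the answer is reported in g/cm³:

```python
# Density from water displacement, following the slides:
# weigh the sample, then use the displaced water's volume as the sample's.

def density(mass_g: float, displaced_volume_cm3: float) -> float:
    """Density = mass / volume; the displaced water's volume equals the rock's."""
    return mass_g / displaced_volume_cm3

d = density(300.0, 100.0)
print(d)  # 3.0 g/cm^3
```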
Here is one of the great existential problems that caused a lot of 20th century angst: the more we learn about the universe, the more we realize how insignificant we are.
Here is 21st century science's answer to that: Humans are very sophisticated conglomerates of materials. We are very special indeed.
In the video below, Dartmouth professor Marcelo Gleiser walks us through some of the key milestones in our understanding of the universe, beginning with Copernicus telling us we are not at the center of the solar system, aka "the universe" as it was understood at the time. From Copernicus on, we grew less and less significant. Good heavens, our solar system is not even the center of the universe! Next, even our Milky Way galaxy is no big deal.
The universe is expanding, and we are learning that we are smaller and smaller. Thanks a lot, science!
And yet, as Gleiser explains, we are also learning about life in the universe. While there may be Earth-like planets, we know just how special the conditions must be for the existence of intelligent life. Consider, for instance, that humans are able to think about who we are and ask questions about the universe.
Amazingly, the more we learn about the universe, the more we move back to the center again.
Follow Daniel Honan on Twitter @Daniel Honan
Plastic Recycling Codes
So what do all those numbers and letters on plastic products mean?
Plastic Recycling Code 1 Polyethylene Terephthalate (PET, PETE). PET is clear, tough, and has good gas and moisture barrier properties. Commonly used in soft drink bottles and many injection molded consumer product containers. Other applications include strapping and both food and non-food containers. Cleaned, recycled PET flakes and pellets are in great demand for spinning fiber for carpet yarns, producing fiberfill and geo-textiles. Nickname: Polyester.
USES: Plastic soft drink, water, sports drink, beer, mouthwash, catsup and salad dressing bottles. Peanut butter, pickle, jelly and jam jars. Ovenable film and ovenable prepared food trays.
Potential problems: PET/PETE degrades with use, and wrinkled surfaces can host germs--as can backwash. PET/PETE bottles can contain trace amounts of Bisphenol A (BPA), a synthetic chemical that interferes with the body's natural hormonal messaging system. BPA has been linked to breast and uterine cancer, an increased risk of miscarriage, and decreased testosterone levels. This problem is amplified when the container is filled with hot liquids or exposed to high heat, such as being left in a car.
Plastic Recycling Code 2 High Density Polyethylene (HDPE). HDPE is used to make bottles for milk, juice, water and laundry products. Unpigmented bottles are translucent, have good barrier properties and stiffness, and are well suited to packaging products with a short shelf life such as milk. Because HDPE has good chemical resistance, it is used for packaging many household and industrial chemicals such as detergents and bleach. Pigmented HDPE bottles have better stress crack resistance than unpigmented HDPE bottles.
USES: Milk, water, juice, cosmetic, shampoo, dish and laundry detergent bottles; yogurt and margarine tubs; cereal box liners; grocery, trash and retail bags.
Potential Problems: No known problems.
Plastic Recycling Code 3 Vinyl (Polyvinyl Chloride or PVC): In addition to its stable physical properties, PVC has excellent chemical resistance, good weatherability, flow characteristics and stable electrical properties. The diverse slate of vinyl products can be broadly divided into rigid and flexible materials. Bottles and packaging sheet are major rigid markets, but it is also widely used in the construction market for such applications as pipes and fittings, siding, carpet backing and windows. Flexible vinyl is used in wire and cable insulation, film and sheet, floor coverings, synthetic leather products, coatings, blood bags, medical tubing and many other applications.
USES: Clear food and non-food packaging, medical tubing, wire and cable insulation, film and sheet, construction products such as pipes, fittings, siding, floor tiles, carpet backing and window frames.
Potential Problems: Contains numerous toxic chemicals called adipates and phthalates ("plasticizers"), which are used to soften brittle PVC into a more flexible form. PVC is commonly used to package foods and liquids, and is ubiquitous in children's toys and teethers, plumbing and building materials, and everything from cosmetics to shower curtains. Traces of these chemicals can leach out of PVC when it comes into contact with food. Vinyl chloride (the VC in PVC) is a known human carcinogen. The European Union has banned the use of DEHP (di-2-ethylhexyl phthalate), the most widely used plasticizer in PVC, in children's toys.
Plastic Recycling Code 4 Low Density Polyethylene (LDPE). Used predominantly in film applications due to its toughness, flexibility and relative transparency, making it popular for use in applications where heat sealing is necessary. LDPE is also used to manufacture some flexible lids and bottles, and it is used in wire and cable applications.
USES: Dry cleaning, bread and frozen food bags, squeezable bottles, e.g. honey, mustard.
Plastic Recycling Code 5 Polypropylene (PP). USES: Catsup bottles, yogurt containers and margarine tubs, medicine bottles.
Plastic Recycling Code 7 Other, including Polycarbonate (PC). USES: Three and five gallon reusable water bottles, some citrus juice and catsup bottles.
Potential Problems: Studies show polycarbonates can also leach the potentially harmful synthetic hormone Bisphenol A (BPA). This problem is amplified when the container is filled with hot liquids or exposed to high heat such as being left in a car.
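As a quick reference, the resin codes discussed above can be collected into a lookup table. This is a sketch: codes 1–4 follow the text, while codes 5–7 are only partially described above and follow the standard SPI resin identification codes, an assumption where the text is silent.

```python
# Lookup table for plastic resin identification codes.
# Codes 1-4 come from the page; 5-7 follow the standard SPI codes (assumed).

RESIN_CODES = {
    1: "PET/PETE - polyethylene terephthalate",
    2: "HDPE - high-density polyethylene",
    3: "PVC - polyvinyl chloride (vinyl)",
    4: "LDPE - low-density polyethylene",
    5: "PP - polypropylene",
    6: "PS - polystyrene",
    7: "Other - includes polycarbonate, a potential BPA source",
}

print(RESIN_CODES[1])
```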
SIGG bottles are recyclable, reusable, and BPA-free. Perfect for folks who think green and drink clean. SIGG bottles are lined on the inside with a water-based, non-toxic epoxy resin to minimize unwanted tastes and scents. Each is also made from a single piece of aluminum, making bottles ultra-lightweight yet rugged and crack-resistant. Lifetime guarantee.
What will be YOUR LEGACY?
Did you know people replace cell phones on average every 9 months, and many of the components are toxic?! Now, thanks to numerous organizations, you can recycle your cell phone for free and benefit those in need at the same time:
U.S. Troops
Victims of Violent Crimes
Animals - Petco Foundation
The following numbers explain the current emergency. In January 2001, the national debt was $5.7 trillion. By January 2009, it had risen to $10.6 trillion. A year ago, it was $15.2 trillion. Now it's $16.4 trillion. We are hitting the debt ceiling again.
According to an article yesterday in the Washington Post:
House Speaker John A. Boehner (R-Ohio) ... insisted that Republicans hold the line, telling his members they must demand that every dollar they raise the debt limit be paired with commensurate spending cuts.
What Congress could do is this. Authorize a debt ceiling increase of one trillion dollars over two years. Of course, if that's all Congress does, then the country would probably burn through the money long before the two years are up, and we'd be back to square one. That's why Congress should require that the ceiling rise GRADUALLY over the next two years. Let the ceiling rise $47 billion per month in the first year, and $37 billion per month in the second year, but no more. That adds up to about one trillion.
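The schedule's arithmetic can be checked in two lines (a sketch; amounts in billions of dollars):

```python
# Checking the proposed schedule: the ceiling rises $47 billion per month in
# the first year and $37 billion per month in the second (amounts in billions).

year1 = 47 * 12
year2 = 37 * 12
total = year1 + year2
print(year1, year2, total)  # 564 444 1008 -> about one trillion dollars
```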
So, during 2013, the debt ceiling would rise gradually by $47 billion per month. Does that mean we would default on any loans? Of course not. Spending would have to be cut, but that doesn't mean that any interest payments on the national debt would be cut. After paying interest on the national debt, plenty of money would be left over to help fund the government.
This new strategy would give considerable discretion to the executive branch regarding spending, but the President could not spend on anything that's not been approved by Congress. If the President abuses his discretion, then Congress could always pass a new law to remedy the situation. I don't like giving discretion to the President, but discretion where to cut is much better than not cutting at all.
The first debt ceiling was introduced in 1917, during World War I. Before then, Congress had to approve every new issuance of debt, and I'm not suggesting that Congress do so again. The Budget Control Act of 2011 approved incremental increases of the debt limit, but not the kind of monthly gradual increases that I'm suggesting.
It took three hackers less than a day to decipher the majority of a list of 16,000 encrypted passwords, all because of the laughably easy-to-crack passwords most of us pick to protect our online lives. The most successful guy got 90 percent of the "plains," as hackers call deciphered passwords, in 20 hours; the least successful got just 62 percent of them in about an hour. Yes, it's really that easy. But, rather than sit there, shocked at how little security passwords provide, we should use this Ars Technica article as a lesson in password security. And the first lesson learned therein is: never, ever use a six-character password.
Rule 1: Six characters is always too short. The easiest and first thing all of Ars's hackers did was guess the super-weak six-character passwords, via what's called a "brute force" attack. See, the most successful of the hackers, Jeremi Gosney, a password expert with Stricture Consulting Group, hacked 62 percent of the list in sixteen minutes because that's how easy it is to guess a code that's just six characters long:
Gosney's first stage cracked 10,233 hashes, or 62 percent of the leaked list, in just 16 minutes. It started with a brute-force crack for all passwords containing one to six characters, meaning his computer tried every possible combination starting with "a" and ending with "//////." Because guesses have a maximum length of six and are comprised of 95 characters—that's 26 lower-case letters, 26 upper-case letters, 10 digits, and 33 symbols—there are a manageable number of total guesses. This is calculated by adding the sum 95^6 + 95^5 + 95^4 + 95^3 + 95^2 + 95. It took him just two minutes and 32 seconds to complete the round, and it yielded the first 1,316 plains of the exercise.
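The search-space arithmetic in the quoted passage can be verified directly; a short sketch, with the alphabet size and lengths taken from the quote:

```python
# Verifying the brute-force arithmetic from the quoted passage: guesses of
# length 1 through 6 over a 95-character alphabet (26 lower-case letters,
# 26 upper-case letters, 10 digits, 33 symbols).

ALPHABET = 95  # size of the character set used in the attack

total_guesses = sum(ALPHABET ** length for length in range(1, 7))
print(f"{total_guesses:,}")  # 742,912,017,120 candidate passwords
```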
Rule 2: So is a seven- and eight-character password, probably. After doing almost nothing to guess six-character passwords, it gets a tiny bit harder for hackers, but not much. For example, Gosney then did more of these types of guessing attacks with different permutations of longer possibilities, trying seven or eight character passwords with only lower case letters, for example. That technique takes mere seconds, and in this case revealed many additional "plains."
Rule 3: "Salting" doesn't make six-character passwords strong. Many sites boast that their password protection technology uses "salting," meaning it adds random data to each password before hashing, thus making it harder for hackers to figure out the original code of these shorter passwords using those brute-force attacks. Turns out that's not really true:
But the thing about salting is this: it slows down cracking only by a multiple of the number of unique salts in a given list. That means the benefit of salting diminishes with each cracked hash. By cracking the weakest passwords as quickly as possible first (an optimization offered by Hashcat) crackers can greatly diminish the minimal amount of protection salting might provide against cracking.
Plus, a lot of sites don't use salting. So, again: See rules 1 and 2.
Rule 4: Don't use real words. The least successful of the hackers, who goes by the handle Radix, guessed 62 percent of the list in about an hour, using a custom-compiled dictionary of popular passwords. Just by using a publicly available list of plain-text passwords, called the RockYou list, he got 30 percent of the insecure codes, all because a lot of people use the same common words in their passwords.
Rule 5: Just make an 11-character password already. Those first few hacks done by Gosney and Radix are basically password hunting for amateurs. With a couple of slightly more sophisticated techniques, bigger graphics cards, and a little more experience, even codes that follow some of the "best practices" get hacked. The very best way not to fall prey to that, however, is to create super-long strings of gibberish. As the chart in the article shows, it gets exponentially harder to crack a code after 8 characters. Ars says use 11 just to be safe: "Readers should take pains to make sure their passwords are a minimum of 11 characters, contain upper- and lower-case letters, numbers, and symbols, and aren't part of a pattern."
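That exponential wall can be seen directly by comparing search-space sizes over the same 95-character alphabet; a sketch, with lengths chosen to match the rules above:

```python
# How the brute-force search space grows with password length, using the
# same 95-character alphabet as the article. The jump between 8 and 11
# characters is the exponential wall the article describes.

ALPHABET = 95

keyspace = {length: ALPHABET ** length for length in (6, 8, 11)}
for length, size in keyspace.items():
    print(f"{length:>2} chars: {size:,} candidates")
```

Going from 8 to 11 characters multiplies the search space by 95³ (about 857,000), which is why each extra character matters far more than any clever substitution.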
Place: Santiago, Santiago, Santiago, Chile
Alt names: Santiago de Chile (source: Wikipedia); Santiago del Nuevo Extremo (source: Encyclopædia Britannica (1988) X, 432)
Coordinates: 33.438°S 70.651°W
Located in: Santiago, Santiago, Chile (1450 - )
source: Getty Thesaurus of Geographic Names
source: Family History Library Catalog
the text in this section is copied from an article in Wikipedia
Santiago, also Santiago de Chile, is the capital and largest city of Chile. It is also the center of its largest conurbation. Santiago is located in the country's central valley.
Santiago is named after the biblical figure St. James.
Founding of the city
Sent by Francisco Pizarro from Peru, having made the long journey from Cuzco, the Extremaduran conquistador Pedro de Valdivia reached the valley of the Mapocho on 13 December 1540. Valdivia's forces camped by the river on the slopes of the Tupahue hill and slowly began to interact with the Picunche Indians who inhabited the area. Valdivia later summoned the chiefs of the area to a parliament, where he explained his intention to found a city on behalf of King Carlos I of Spain, which would be the capital of his governorship of Nueva Extremadura. The Indians accepted and even recommended the foundation of the town on a small island between two branches of the river, next to a small hill called Huelén.
Valdivia left months later for the south with his troops, beginning the War of Arauco, and Santiago was left unprotected. The indigenous forces of Michimalonco used this to their advantage and attacked the fledgling city. On September 11, 1541, the city was destroyed by the Indians, but the garrison of 55 Spaniards managed to defeat the attackers; the resistance was apparently led by Inés de Suárez, a mistress of Valdivia. The city was slowly rebuilt, with prominence given to the newly founded Concepción, where the Royal Audiencia of Chile was founded in 1565. However, the constant danger faced by Concepción, due partly to its proximity to the War of Arauco and also to a succession of devastating earthquakes, would not allow the definitive establishment of the Royal Court in Santiago until 1607. This establishment reaffirmed the city's role as capital.
Colonial Santiago
Although early Santiago appeared to be in imminent danger of permanent destruction, threatened by Indian attacks, earthquakes, and a series of floods, the city began to grow quickly. Of the 126 blocks designed by Gamboa in 1558, forty were occupied, and in 1580 the first important buildings in the city began to rise, the start of construction highlighted by the placing of the first stone of the cathedral in 1561 and the building of the church of San Francisco in 1572. Both of these constructions consisted mainly of adobe and stone. In addition to the construction of important buildings, the city began to develop as nearby lands welcomed tens of thousands of livestock.
A series of disasters impeded the development of the city during the sixteenth and seventeenth centuries: an earthquake and a smallpox epidemic in 1575; floods of the Mapocho River in 1590, 1608, and 1618; and, finally, the earthquake of May 13, 1647, which killed over 600 people and affected more than five thousand victims. However, these disasters would not stop the growth of the capital of the Captaincy General of Chile at a time when all the power of the country was centered on the Plaza de Armas santiaguina.
In 1767, the corregidor Luis Manuel de Zañartu launched one of the most important architectural works of the entire colonial period, the Calicanto Bridge, effectively joining the city to La Chimba on the north side of the river, and began the construction of embankments to prevent overflows of the Mapocho River. Although the bridge was built, the embankments were constantly destroyed by the river. In 1780, Governor Agustín de Jáuregui hired the Italian architect Joaquín Toesca, who would design, among other important works, the façade of the cathedral, the Palacio de La Moneda, the San Carlos canal, and the final construction of the embankments during the government of Ambrosio O'Higgins. These important works were opened permanently in 1798. The O'Higgins government also oversaw the opening of the road to Valparaíso in 1791, which connected the capital with the country's main port.
Capital of the Republic
Two new earthquakes hit the city, one on November 19, 1822, and another on February 20, 1835. These two events, however, did not prevent the city's rapid, continued growth. In 1820, the city reported 46,000 inhabitants, while in 1854, the population count reached 69,018. In 1865, the census reported 115,337 inhabitants. This significant increase was the result of suburb growth to the south and west of the capital, and in part to La Chimba, a vibrant district growing from the division of old properties that existed in the area. This new peripheral development led to the end of the traditional checkerboard structure that previously governed the city center.
The Santiago of the Centenary
With the advent of the new century, the city began to experience various changes related to the strong development of industry. Valparaíso, which had hitherto been the economic center of the country, slowly began to lose prominence to the capital. By 1895, 75% of the national manufacturing industry was in the capital and only 28% in the port city, and by 1910 major banks and shops had set up in the streets of the city center, leaving Valparaíso.
The enactment of the Autonomous Municipality law and its accompanying decree permitted the creation of various administrative divisions in the Department of Santiago, in order to improve local governance. Maipú, Ñuñoa, Renca, Lampa and Colina were created in 1891; Providencia and Barrancas in 1897; and Las Condes in 1901. In the Department of La Victoria, Lo Cañas originated in 1891 and would be divided into La Granja and Puente Alto in 1892, with La Florida born in 1899 and La Cisterna in 1925.
The San Cristóbal hill in this period began a long process of improvement. In 1903 an astronomical observatory was installed, and the following year the first stone of a Marian shrine was laid at its summit; the shrine is characterized by a 14-meter image of the Virgin Mary, visible from various points of the city. However, the idea of reforesting the hill would not be fulfilled until some decades later.
With the desire to celebrate the centenary of the Republic in 1910, many urban works were performed. The railway network was extended, connecting the city with its nascent suburbs through a rail ring and a line toward the Cajón del Maipo, while a new railway station was built in the north of the city: the Mapocho Station. On the land reclaimed by channeling the Mapocho, the Parque Forestal was created, and the new buildings of the Museum of Fine Arts, the National Internship school and the National Library were opened. In addition, the sewer works were completed, covering about 85% of the urban population.
Population explosion
The Greater Santiago
Relative growth of Santiago, by communes[2] (population index, earliest census = 100):
La Granja: 100 → 264 → 1379 → 3424
Las Condes: 100 → 197 → 506 → 1083
San Miguel: 100 → 221 → 373 → 488
This growth took place without any regulation, and planning only began to be implemented during the 1960s with the creation of various development plans for Greater Santiago, a concept that reflected the new reality of a much larger city. In 1958 the Intercommunal Plan of Santiago was released, proposing the organization of urban areas: it set a limit of 38,600 urban and semi-urban hectares for a maximum population of 3,260,000 inhabitants, and called for the construction of new avenues, such as Américo Vespucio Avenue and Panamericana Route 5, the expansion of existing ones, and the establishment of 'industrial belts'. The celebration of the World Cup in 1962 gave new impetus to the improvement works in the city. In 1966 the Santiago Metropolitan Park was established on Cerro San Cristóbal, and MINVU began eradicating shanty towns and constructing new housing, such as the San Borja remodeling, near which the Edificio Diego Portales was built.
In 1967 the new Pudahuel International Airport opened, and, after years of discussion, construction of the Santiago Metro began in 1969; its first phase ran beneath the western section of the Alameda and was inaugurated in 1975. The Metro became one of the most prestigious works in the city and continued to expand in the following years, reaching two perpendicular lines by the end of 1978. Telecommunications also saw important development, reflected in the construction of the Torre Entel, which since its completion in 1975 has been one of the symbols of the capital, standing as the tallest structure in the country for two decades.
After the coup of 1973 and the establishment of the military regime, major changes in urban planning did not start until the 1980s, when the government adopted a neoliberal economic model and the role of organizer passed from the state to the market. In 1979 the master plan was amended, extending the urban area to more than 62,000 ha for real estate development. This caused a new sprawl of the city, which reached 40,619 ha in the early 1990s, especially around La Florida, which in the 1992 census became the country's most populous municipality, with 328,881 inhabitants. Meanwhile, a strong earthquake struck the city on March 3, 1985; although it caused few casualties, it left many homeless and destroyed many old buildings.
The metropolis in the early twenty-first century
With the start of the transition to democracy in 1990, the city of Santiago surpassed four million inhabitants, with growth concentrated in the south: La Florida was followed in population by Puente Alto and Maipú. Real estate development in these municipalities and others, like Quilicura and Peñalolén, was largely due to the construction of housing projects for middle-class families. Meanwhile, high-income families moved into the foothills of the so-called Barrio Alto, increasing the population of Las Condes and giving rise to new communes like Vitacura and Lo Barnechea. Moreover, although poverty began to drop significantly, there remained a strong dichotomy between the thriving global city and the scattered city slums.
The Providencia Avenue area was consolidated as an important commercial hub in the eastern sector, and in the 1990s this development extended to the Barrio Alto, which became an attractive location for the construction of high-rise buildings. Major companies and financial corporations were established in the area, giving rise to a thriving modern business center known as Sanhattan. The departure of these companies to the Barrio Alto and the construction of shopping centers all around the city created a crisis in the city center, which had to reinvent itself: its main shopping streets were turned into pedestrian walkways, such as the Paseo Ahumada, and tax benefits were instituted for the construction of residential buildings, attracting mainly young adults.
In these years, the city began to face a series of problems generated by its disorderly growth. Air pollution reached critical levels during the winter months, and a layer of smog settled over the city, forcing the authorities to adopt legislative measures for industry and driving restrictions for cars. In addition, the vast expanse of the city overwhelmed the transportation system. The Metro was extended considerably: its lines were lengthened and three new lines were built between 1997 and 2006 in the southeastern sector, while a new extension to Maipú was inaugurated in 2011, leaving the metropolitan railway with a length of 105 km. The bus system underwent a major reform in the early 1990s and again in 2007, with the establishment of a transport master plan known as Transantiago, which has faced a number of problems since its launch.
Entering the twenty-first century, Santiago has continued its rapid development. Various urban highways have been built, the Civic District was renewed with the creation of the Plaza de la Ciudadanía, and construction of the Ciudad Parque Bicentenario began to commemorate the bicentenary of the Republic. The development of tall buildings continues in the eastern sector, culminating in the opening of the Titanium La Portada and Gran Torre Santiago skyscrapers in the Costanera Center complex. However, socioeconomic inequality and geosocial fragmentation remain two of the most important problems of both the city and the country.
On February 27, 2010, a strong earthquake was felt in the capital, causing some damage to old buildings; however, some modern buildings were also left uninhabitable, generating much debate about the actual enforcement of mandatory earthquake standards in Santiago's modern architecture.
Several new projects are expected in the coming years, especially in transport: a reshaping of the international airport by 2012 and an expansion of rail services, including projects currently under evaluation such as a tram network in Las Condes, commuter trains to Lampa and Padre Hurtado (Melitrén), and a high-speed train connecting the capital with Valparaíso and Viña del Mar. Two new urban highways, Vespucio Oriente and Costanera Central, are in the bidding process, while the Santiago Metro has announced the construction of two new lines, 3 and 6. To this transformation would be added parks on the banks of the Mapocho River, which would be made navigable, a flagship project of Sebastián Piñera, who was President between 2010 and 2014.
Research Tips
This page uses content from the English Wikipedia. The original content was at Santiago, Chile. The list of authors can be seen in the page history. As with WeRelate, the content of Wikipedia is available under the Creative Commons Attribution/Share-Alike License.
From Wikipedia, the free encyclopedia
Jump to: navigation, search
Not to be confused with Pleistocene or Industrial plasticine.
Type Modelling clay
Inventor William Harbutt
Company Harbutt
Country United Kingdom
Availability 1900–
A Plasticine model of a rat, by Polish animator Monika Kuczyniecka
Plasticine, a brand of modelling clay, is a putty-like modelling material made from calcium salts, petroleum jelly and aliphatic acids. The name is a registered trademark of Flair Leisure Products plc. Plasticine is used extensively for children's play, but also as a modelling medium for more formal or permanent structures. Because of its non-drying property, it is a popular choice of material for stop-motion animation (including several Oscar-winning films by Nick Park). The brand-name clay is mentioned in music, such as the "plasticine porters" in "Lucy in the Sky with Diamonds" and the song Plasticine by Placebo.
Plasticine was formulated by art teacher William Harbutt of Bathampton, in Bath, England, in 1897. He wanted a non-drying clay for his sculpture students. Although the exact composition is a secret, Plasticine is composed of approximately 65% bulking agent (principally gypsum), 10% petroleum jelly, 5% lime, and 10% lanolin and stearic acid.[1] It is non-toxic, sterile, soft, and malleable, and does not dry on exposure to air (unlike superficially similar products such as Play-Doh, which is based on flour, salt and water). It cannot be hardened by firing; it melts when exposed to heat, and is flammable at much higher temperatures.[citation needed]
A patent was awarded in 1899, and in 1900 commercial production started at a factory in Bathampton. A different formulation was patented in 1915.[2] This had lamb's wool or cotton wool mixed into the basic plasticine to give a stronger, fibrous composition intended for ear plugs and as a sterile dressing for wounds and burns.[3] The original Plasticine was grey, but the product initially sold to the public came in four colours. It was soon available in a wide variety of bright colours. Plasticine was popular with children, was widely used in schools for teaching art, and has found a wide variety of other uses (for example, moulding casts for plaster and plastics). The Harbutt company promoted Plasticine as a children's toy by producing modelling kits in association with companies responsible for popular children's characters such as Noddy, the Mr. Men and Paddington Bear.
The original Plasticine factory was destroyed by fire in 1963 and replaced by a modern building. The Harbutt company continued to produce Plasticine in Bathampton until 1983, when production was moved to Thailand.
The Colorforms company was the major American licensee of Plasticine from 1979 until at least 1984. Apparently, the use of a different chalk compound caused a product inconsistency, and the US version was considered inferior to the original mix.
From 1983 to 2006, the brand went through a number of ownership changes and was off the market for a long time. Plasticine was owned by Bluebird Toys plc following its acquisition of Harbutt's parent company, Peter Pan. Then, following Bluebird's takeover by Mattel in 1998, the brand was sold on to Humbrol Ltd, famous for its model paints and owner of the Airfix model kit brand. In 2005, Flair Leisure licensed the brand from Humbrol and relaunched Plasticine. A year later, when Humbrol went into administration, Flair bought the Plasticine brand outright.
Similar products[edit]
A similar product, "Kunst-Modellierthon" (known as Plastilin), was invented by Franz Kolb of Munich, Germany in 1880. This product is still available, known as "Münchner Künstler Plastilin" (Munich artists' Plastilin). In Italy, the product Pongo is also marketed as "plastilina" and shares the main attributes of Plasticine.
A life-size vegetable plot in James May's Paradise in Plasticine
Plasticine-like clays are also used in commercial party games such as Cranium[citation needed], Rapidough and Barbarossa[citation needed].
Television presenter James May, together with Chris Collins, Jane McAdam Freud, Julian Fullalove and around 2000 members of the public, created a show garden made entirely of Plasticine for the 2009 Chelsea Flower Show. Called 'Paradise in Plasticine', it took 6 weeks and 2.6 tons of Plasticine in 24 colours to complete. May said, "This is, to our knowledge, the largest and most complex model of this type ever created." It could not be judged under the standard criteria as it contained no real plants, but was awarded an honorary gold award made from Plasticine.[5][6] The garden was extremely popular with the public and went on to win the Royal Horticultural Society's 'people's choice' award for best small garden.[7]
During World War II, Plasticine was used by bomb disposal officer Major John P. Hudson R.E. as part of the defuzing[8] process for the new German "Type Y" battery-powered bomb fuze. The "Type Y" fuze had an anti-disturbance device that had to be disabled before the fuze could be removed.[9][10][unreliable source?] Plasticine was used to build a dam around the head of the fuze to hold some liquid oxygen. The liquid oxygen cooled the battery down to a temperature at which it would no longer function; with the battery out of commission, the fuze could be removed safely.[11][12]
When an engine is being tuned for higher performance, different pistons and higher-lift camshafts are often installed. This creates a risk that the valves might strike the piston, causing serious damage. Valve-to-piston clearance can be measured by placing a piece of Plasticine on top of a piston, replacing the cylinder head, and rotating the engine manually through a full cycle. When the cylinder head is removed, the valves will have made an impression in the Plasticine. The thickness of this impression is measured to give the valve-to-piston clearance.[13]
See also
1. ^ May, James (2009). Toy Stories. London: Conway. p. 16. ISBN 9781844861071.
2. ^ espacenet citation of 1915 Harbutt patent
3. ^ May, James (2009). Toy Stories. London: Conway. p. 25. ISBN 9781844861071.
4. ^ International Association of Athletics Federations (IAAF). "Competition Rules 2016-2017, Rule 184.3" (PDF). pp. 208–210.
5. ^ "RHS Chelsea Flower Show 2009: Paradise in Plasticine". BBC One. BBC. Retrieved 20 May 2009.
6. ^ Elliott, Valerie (20 May 2009). "Top Gear plasticine garden takes 'gold' at RHS Chelsea Flower Show". The Times. Retrieved 20 May 2009.
7. ^ James May: Paradise in Plasticine
8. ^ Jappy (2001), p. rear cover "these bombs were to be defuzed 'regardless of the loss of life to bomb disposal personnel'."
9. ^ TM 9-1985-2 (1953), p.182-185
10. ^ Dunstable Town Centre (20 April 2005). "The Earl and the Secretary". BBC. A3924443. Retrieved 13 September 2015. The "/Y" fuse behaved exactly like the normal one when tested, but it had an additional circuit that was isolated after activation. This circuit contained mercury tilt switches which would detonate the bomb if the fuse were turned, even slowly. This was a booby trap designed to kill bomb disposal personnel
11. ^ Hogben, Arthur (1987). Designed to Kill. London: Patrick Stephens Limited. pp. 131–133. ISBN 0-85059-865-6. It was believed that by using liquid oxygen poured over the fuze head the necessary very low temperature within the fuze could be achieved.
12. ^ Jappy, M. J. (2001). Danger UXB. London: MacMillan Publishers. pp. 150–153. ISBN 0-752-21576-0. That was wonderful when we got a bomb with the fuze lying at the top but if the fuze was at the side, it wasn't quite so easy. [...] I think it must have been me who thought of the idea of making a little neck of clay around the side to hold the liquid. I think I used plasticine actually.
13. ^ Car Craft: How To Check Valve-To-Piston Clearance
External links |
Analysis of Green Chemistry publications over the past four years.
This figure is taken from Green chemistry: state of the art through an analysis of the literature by V. Dichiarante, D. Ravelli and A. Albini. Green Chemistry Letter and Reviews Vol. 3, No. 2, June 2010, 105-113.
As the label indicates, the pie chart shows the distribution of green chemistry topics across articles published in 2008. The majority of the chart (about 50%) is attributed to catalysis: starting a reaction under more favorable conditions that require fewer resources, whether those resources are heat, energy, reagents, or other inputs. Metal catalysts were the most cited, particularly in reactions involving enzymes. Acids also appear in this category and, according to the article, are used mainly in condensation reactions. The next largest section (about 40%) is attributed to media, that is, where or in what the reaction takes place. Many reactions require a liquid medium, and many of these liquids, especially in organic chemistry, are volatile or toxic compounds. As a result, much of the green chemistry research on reaction media uses no solvent at all, which allows the greatest reduction of waste. Water has also gained a prominent role in the green chemistry literature, as it is a universal solvent and can usually be recycled in a reaction. Ionic liquids are the third major media topic; these are liquids containing charged compounds that help guide a reaction. Ionic liquids are usually not volatile and are stored more easily than their organic counterparts. Finally, the last 10% of the chart goes to 'new methods,' or novel ways to do old reactions. Using microwaves to start and maintain a reaction is the most prominent method, followed by research advances in photochemistry and ultrasound, using light or sound respectively in reactions.
Green Chemistry Research Publications are increasing in number, though the overall body of literature is still small.
Data show an increase in the publication of papers about green chemistry, and in citations of those articles, over the past five years. Many papers are authored by scientists outside the US; China is now the second most prolific source, after the US, of both chemistry and Green Chemistry articles.
The apparent increase in the number of hits on ‘green chemistry’ was seen in many different fields. The graph shows this increase in the medical databases (PubMed), the research databases (ISI Web of Knowledge), and even the government databases (Science Gov). The prevalence and importance of green chemistry research has been booming over the past five years, and is expected to increase even more in the next five years.
Compared to any other field in chemistry, funding for Green Chemistry is shockingly low.
NSF Bubble graph
Postdoctoral positions are an increasingly necessary step in an academic chemist's career; postdocs often conduct the most cutting-edge research. The overall proportion of science PhDs who held a postdoctoral position grew from 41% in 1973 to 61% in 2005; in 2006, 8% of these were in chemistry.
22,900 U.S. citizens and permanent residents were in academic postdoc positions in the fall of 2005 (SDR in Indicators); of these, some 4,200 were in chemistry. An additional 26,600 postdoc positions across all fields were held that same year by students with temporary visas; 2,750 of these were in chemistry (GSS in Indicators). However, there are few postdoc positions dedicated to Green Chemistry in the US.
Those US postdocs that do exist in Green Chemistry are not funded with government support; the source of funding matters because it profoundly affects the direction in which research trends.
However, there is funding to be found from the EPA & NSF. Click here for a list of opportunities!
Increase in the impact factor of the Green Chemistry Journal.
Impact factors show the average number of citations per paper, giving a rough indication of a journal's significance and reach. Amongst the RSC chemistry journals, Chemical Society Reviews has the highest impact factor, at 24.6, while the average for these journals is 2.4. Green Chemistry recently received an impact factor of 6.056, roughly two and a half times that average. Specialized journals such as Green Chemistry would be predicted to receive lower impact factors, given the narrowness of the field compared with a general chemistry journal. Green Chemistry's higher impact factor therefore shows its increasing prevalence and salience in the chemical world.
Web of Science. C2011.
National Research Council, Committee on Benchmarking the Research Competitiveness of the US in Chemistry. The future of US chemistry research. Washington (DC): National Academies Press. 2007.
National Science Board. Science and engineering indicators 2008: Volume One
Research Associateship Programs. C2007. Washington (DC): National Research Council.
RSC Publishing Blog. 2011.
|
Henry Eugene Davies (July 2, 1836 – September 7, 1894) was an American soldier, writer, public official and lawyer. He served in the Union Army as a brigadier general of volunteers in cavalry service during the American Civil War ("Civil War") and was promoted to the grade of major general of volunteers at the end of the war. Davies was one of the few nonprofessional soldiers in the Union cavalry in the East to be promoted to the grade of general. He led his brigade in several major battles, especially during the Overland Campaign, the Battle of Trevilian Station, the Siege of Petersburg and the Appomattox Campaign at the end of the war.
|
July- Mason Bees are still flying.
Bumble bees on summer flowers
Summer flowers
Bumble bee- Bombus vosnechenskii
Summer flowers
What is so neat about these bees flying into July is that they are flying and nesting in a nectar- and pollen-rich time period. As beekeepers, we know that spring is a time when nectar and pollen are abundant for nesting. This period is followed by June, which is usually a dearth period: food in the form of pollen and nectar is scarce, as early spring flowers have finished blooming and summer flowers are still developing. June is therefore normally a very difficult period for mason bees to survive without starving, since, unlike honey bees, they do not store honey. Thus, surviving through to July is quite the miracle! The surviving mason bees are now again in a bountiful period, when blackberry, fireweed and other summer flowers produce lots of pollen and nectar.
|
The Greek word behind "Rapture" is found in 1 Thessalonians 4:17, in the verb form ἁρπαγησόμεθα (harpagisometha), which means "we shall be caught up" or "taken away".
The first biblical rapture event took place in the Book of Genesis
Genesis 5:24 “And Enoch walked with God; and he was not, for God took him”.
After God raptured Enoch in the Book of Genesis, the world was judged in a devastating flood that destroyed nearly all of humanity.
King Solomon wrote about the Rapture of the Church in the Old Testament, comparing the rebirth of Israel to a Fig Tree.
The voice of my beloved! Behold, he comes leaping upon the mountains, skipping upon the hills. My beloved is like a gazelle or a young stag. Behold, he stands behind our wall; He is looking through the windows, gazing through the lattice. My beloved spoke, and said to me: “Rise up, my love, my fair one, and come away. For lo, the winter is past, the rain is over and gone. The flowers appear on the earth; the time of singing has come, and the voice of the turtledove is heard in our land. The fig tree puts forth her green figs, and the vines with the tender grapes give a good smell. Rise up, my love, my fair one, and come away! Song of Solomon 2:8-13
Luke 17:26-30 26 And as it was in the days of Noah, so it will be also in the days of the Son of Man: 27 They ate, they drank, they married wives, they were given in marriage, until the day that Noah entered the ark, and the flood came and destroyed them all. 28 Likewise as it was also in the days of Lot: They ate, they drank, they bought, they sold, they planted, they built; 29 but on the day that Lot went out of Sodom it rained fire and brimstone from heaven and destroyed them all. 30 Even so will it be in the day when the Son of Man is revealed.
Jesus warned that a Rapture of the Christian Church would occur before the Judgement of the world.
Here is a list of Rapture scriptures that occur throughout the New Testament. Please read them prayerfully.
1 Corinthians 15:51-52 51 Behold, I tell you a mystery: We shall not all sleep, but we shall all be changed. 52 in a moment, in the twinkling of an eye, at the last trumpet. For the trumpet will sound, and the dead will be raised incorruptible, and we shall be changed.
|
What is volatile memory? - Definition from WhatIs.com
Part of the Storage hardware glossary:
Volatile memory is computer storage that only maintains its data while the device is powered.
Most RAM (random access memory) used for primary storage in personal computers is volatile memory. RAM is much faster to read from and write to than the other kinds of storage in a computer, such as the hard disk or removable media. However, the data in RAM stays there only while the computer is running; when the computer is shut off, RAM loses its data.
Volatile memory contrasts with non-volatile memory, which does not lose its content when power is removed and does not need to have its memory content periodically refreshed.
This was last updated in August 2014
Contributor(s): Ivy Wigmore
Posted by: Margaret Rouse
|
Definition of nuclear age in English:
nuclear age
Entry from US English dictionary
The period in history usually considered to have begun with the first use of the atomic bomb (1945). It is characterized by nuclear energy as a military, industrial, and sociopolitical factor. Also called atomic age.
Example sentences
• The museum sketches the history of the nuclear age, which started with the first atomic bomb test in the New Mexico desert in 1945.
• These three factors are the reasons behind the United States dropping the atomic bomb on Japan, as they unknowingly and unintentionally began the nuclear age and the Cold War.
• For a hundred years of war, culminating in the nuclear age, military technology was designed and deployed to inflict casualties on an ever-growing scale.
Syllabification: nu·cle·ar age
|
Boston Vigilance Committee
From Wikipedia, the free encyclopedia
Theodore Parker led the Boston Vigilance Committee.
An April 24, 1851 poster warning the "colored people of Boston" about policemen acting as slave catchers, pursuant to the Fugitive Slave Law of 1850
Boston Vigilance Committee was an abolitionist organization formed in Boston, Massachusetts on June 4, 1841 at the Marlboro Chapel, Hall No. 3,[1] with the mission of protecting fugitive slaves from being kidnapped and returned to their Southern owners in accordance with the Fugitive Slave Law of 1850. The organization was led by Theodore Parker, an American Transcendentalist and reforming minister of the Unitarian church. Parker is known to history as a member of the Secret Six, an abolitionist group which supported John Brown.
A vigilance committee, in the 19th-century United States, was a group of private citizens who organized themselves for self-protection. The committees were established in areas where there was no local law enforcement, or where the local government was ineffectual, corrupt, or unpopular. Contrary to commonly held opinions, the groups were not unorganized mobs bent on revenge of the moment, but were usually well organized, with charters defining their purposes and official membership lists.[citation needed]
Some were public, but many were secret. Secrecy prevented retaliation by lawless or corrupt organizations and also made it difficult for government officials to pursue criminal charges in areas where the government held jurisdiction. Vigilance committees are not unique to the United States and existed into the 20th century.[citation needed]
Fugitive Slave Law
Before the Civil War, Fugitive Slave Laws authorized slave hunters to pursue runaway slaves into non-slave states.[2] All through the North,[citation needed] vigilance committees opposed to slavery provided fugitive slaves protection,[2] food, clothing, and temporary shelter.[citation needed] They also assisted runaways in using the Underground Railroad[2] toward Canada, which did not recognize the Fugitive Slave Act.[citation needed]
The Fugitive Slave Act of 1793 was a Federal law which enforced a section of the United States Constitution that required the return of runaway slaves. It sought to force the authorities in free states to return fugitive slaves to their masters.[2]
States' response to the act
Some Northern states passed "personal liberty laws", mandating a jury trial before alleged fugitive slaves could be moved. Otherwise, they feared free blacks (who could vote in ten of the 13 states at the time of the adoption of the Constitution) could be kidnapped into slavery. Other states forbade the use of local jails or the assistance of state officials in the arrest or return of such fugitives. In some cases, juries simply refused to convict individuals who had been indicted under the Federal law. Moreover, locals in some areas actively fought attempts to seize fugitives and return them to the South.[citation needed]
The Missouri State Supreme Court routinely held that transportation of slaves into free states automatically made them free. The U.S. Supreme Court ruled, in Prigg v. Pennsylvania (1842), that states did not have to proffer aid in the hunting or recapture of slaves, greatly weakening the law of 1793.[citation needed]
In response to the weakening of the original fugitive slave act, the Fugitive Slave Law of 1850 made any Federal marshal or other official who did not arrest an alleged runaway slave liable to a fine of $1,000. Law-enforcement officials everywhere now had a duty to arrest anyone suspected of being a runaway slave on no more evidence than a claimant's sworn testimony of ownership.[citation needed] The suspected slave could not ask for a jury trial or testify on his or her own behalf. In addition, any person aiding a runaway slave by providing food or shelter was subject to six months' imprisonment and a $1,000 fine.[2] Officers who captured a fugitive slave were entitled to a fee for their work.[citation needed]
With this new law, many Northern citizens of the United States felt compromised in their views on slavery. Whereas the Underground Railroad had previously supported fugitive slaves escape from Southern slaveholding states to freedom in Northern states, the Fugitive Slave Law left those who opposed slavery with the immediate choice of either defying what they believed was an unjust law, or breaking with their beliefs.[citation needed]
Founding of the Boston Vigilance Committee
In response to the law, the Boston Vigilance Committee began advocating resistance to the enforcement of the Fugitive Slave Law through a variety of means. Members contributed funds to send fugitive slaves on to Canada, and to shelter and hide slaves making their way to Boston. When slaves were captured, the Committee paid their legal fees and provided money that allowed escaped slaves to purchase necessities.[3]
Shadrach Minkins
The Committee also carried out more forcible resistance. In 1851, members overpowered Federal marshals and freed Shadrach Minkins, a slave who had escaped from Virginia and been captured in Boston.[4]
Associates of the group included Francis Jackson.[5]
Anthony Burns
A portrait of the fugitive slave Anthony Burns, whose arrest and trial under the Fugitive Slave Act of 1850 touched off riots and protests by abolitionists and citizens of Boston in the spring of 1854.
The case of escaped slave Anthony Burns fell under this statute. His arrest in Boston, and Judge Edward G. Loring's decision to order him back into slavery in Virginia, outraged abolitionists and many ordinary Bostonians, who were increasingly hostile towards the enforcement of the Fugitive Slave Law of 1850. Abolitionist plans to free Burns from prison and spirit him to safety were frustrated when President Pierce deployed federal artillery and Marines to take Burns to the ship back to Virginia.[citation needed]
See also
1. ^ William Cooper Nell (2002). William Cooper Nell, Nineteenth-Century African American Abolitionist, Historian, Integrationist: Selected Writings from 1832-1874. Black Classic Press. p. 99. ISBN 978-1-57478-019-2. Retrieved 23 April 2013.
2. ^ a b c d e Fugitive Slave Law. Massachusetts Historical Society. Retrieved April 23, 2013.
3. ^ "Black History: Fugitives". Retrieved April 23, 2013.
4. ^ The Ordeal of Shadrach Minkins. Massachusetts Historical Society. Retrieved April 23, 2013.
5. ^ Life and correspondence of Theodore Parker. 1864
Further reading |
Chapter 9. Functional Programming
Functional Programming
This chapter is excerpted from Programming Visual Basic 2008: Build .NET 3.5 Applications with Microsoft's RAD Tool for Business by Tim Patrick, published by O'Reilly Media
In this chapter, we'll cover two major Visual Basic programming topics: lambda expressions and error handling. Both are mysterious, one because it uses a Greek letter in its name and the other because it might as well be in Greek for all the difficulty programmers have with it. Lambda expressions in particular have to do with the broader concept of functional programming, the idea that every computing task can be expressed as a function and that functions can be passed around willy-nilly within the source code. Visual Basic is not a true functional programming language, but the introduction of lambda expressions in Visual Basic 2008 brings some of those functional ways and means to the language.
Lambda Expressions
Lambda expressions are named for lambda calculus (or λ-calculus), a mathematical system designed in the 1930s by Alonzo Church, certainly a household name between the wars. Although his work was highly theoretical, it led to features and structures that benefit most programming languages today. Specifically, lambda calculus provides the rationale for the Visual Basic functions, arguments, and return values that we've already learned about. So, why add a new feature to Visual Basic and call it "lambda" when there are lambda things already in the language? Great question. No answer.
Lambda expressions let you define an object that contains an entire function. Although this is something new in Visual Basic, a similar feature has existed in the BASIC language for a long time. I found an old manual from the very first programming language I used, BASIC PLUS on the RSTS/E timeshare computer. It provided a sample of the DEF statement, which let you define simple functions. Here is some sample code from that language that prints a list of the first five squares:
100 DEF SQR(X)=X*X
110 FOR I=1 TO 5
120 PRINT I, SQR(I)
130 NEXT I
140 END
The function definition for SQR( ) appears on line 100, returning the square of any argument passed to it. It's used in the second half of line 120, generating the following output:
1 1
2 4
3 9
4 16
5 25
Lambda expressions in Visual Basic work in a similar way, letting you define a variable as a simple function. Here's the Visual Basic equivalent for the preceding code:
Dim sqr As Func(Of Integer, Integer) = _
Function(x As Integer) x * x
For i As Integer = 1 To 5
Console.WriteLine("{0}{1}{2}", i, vbTab, sqr(i))
Next i
The actual lambda expression is on the second line:
Function(x As Integer) x * x
Lambda expressions begin with the Function keyword, followed by a list of passed-in arguments in parentheses. After that comes the definition of the function itself, an expression that uses the passed-in arguments to generate some final result. In this case, the result is the value of x multiplied by itself.
One thing you won't see in a lambda expression is the Return statement. Instead, the return value just seems to fall out of the expression naturally. That's why you need some sort of variable to hold the definition and return the result in a function-like syntax.
Dim sqr As Func(Of Integer, Integer)
Lambda expression variables are defined using the Func keyword (so original). The data type argument list matches the argument list of the actual lambda expression, but with an extra data type thrown in at the end that represents the return value's data type. Here's a lambda expression that checks whether an Integer argument is even or not, returning a Boolean result:
Public Sub TestNumber( )
Dim IsEven As Func(Of Integer, Boolean) = _
Function(x As Integer) (x Mod 2) = 0
MsgBox("Is 5 Even? " & IsEven(5))
End Sub
This code displays a message that says, "Is 5 Even? False." Behind the scenes, Visual Basic is generating an actual function, and linking it up to the variable using a delegate. (A delegate, as you probably remember, is a way to identify a method generically through a distinct variable.) The following code is along the lines of what the compiler is actually generating for the previous code sample:
Private Function HiddenFunction1( _
ByVal x As Integer) As Boolean
Return (x Mod 2) = 0
End Function
Private Delegate Function HiddenDelegate1( _
ByVal x As Integer) As Boolean
Public Sub TestNumber( )
Dim IsEven As HiddenDelegate1 = _
AddressOf HiddenFunction1
End Sub
In this code, the lambda expression and related IsEven variable have been replaced with a true function (HiddenFunction1) and a go-between delegate (HiddenDelegate1). Although lambdas are new in Visual Basic 2008, this type of equivalent functionality has been available since the first release of Visual Basic for .NET. Lambda expressions provide a simpler syntax when the delegate-referenced function is just returning a result from an expression.
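As a cross-language aside (my illustration, not code from the chapter), Python makes the same equivalence visible: a lambda is just an expression-bodied function bound to a variable, and a plain function reference plays the role of the delegate:

```python
# Lambda form: expression-bodied, with no explicit return statement
is_even = lambda x: (x % 2) == 0

# Equivalent named-function form, analogous to the hidden
# function/delegate pair that Visual Basic generates
def hidden_function_1(x):
    return (x % 2) == 0

is_even_ref = hidden_function_1  # a function reference, like a delegate

print("Is 5 Even?", is_even(5))    # Is 5 Even? False
print(is_even(4), is_even_ref(4))  # True True
```

Both forms are interchangeable as values; the lambda simply saves the boilerplate when the body is a single expression.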
Lambda expressions were added to Visual Basic 2008 primarily to support the new LINQ functionality (see Chapter 17, LINQ). They are especially useful when you need to supply an expression as a processing rule for some other code, especially code written by a third party. And in your own applications, Microsoft is a third party. Coincidence? I think not!
Implying Lambdas
Lambda expressions are good and all, but it's clear that equivalent functionality was already available in the language. And by themselves, lambda expressions are just a simplification of some messy function-delegate syntax. But when you combine lambda expressions with the type inference features discussed back in Chapter 6, Data and Data Types, you get something even better: pizza!
Perhaps I should have written this chapter after lunch. What you get is lambda expressions with inferred types. It's not a very romantic name, but it is a great new tool.
Let's say that you wanted to write a lambda expression that multiplies two numbers together.
Dim mult As Func(Of Integer, Integer, Integer) = _
Function(x As Integer, y As Integer) x * y
MsgBox(mult(5, 6)) ' Displays 30
This is the Big Cheese version of the code: I tell Visual Basic everything, and it obeys me without wavering. But there's also a more laissez faire version of the code that brings type inference into play.
Dim mult = Function(x As Integer, y As Integer) x * y
Hey, that's a lot less code. I was getting pretty tired of typing Integer over and over again anyway. The code works because Visual Basic looked at what you assigned to mult and correctly identified its strong data type. In this case, mult is of type Function(Integer, Integer) As Integer (see Figure 9.1, "Visual Basic is also good at playing 20 questions"). It even correctly guessed the return type.
Figure 9.1. Visual Basic is also good at playing 20 questions
This code assumes that you have Option Infer set to On in your source code, or through the Project properties (it's the default). Chapter 6, Data and Data Types discusses this option.
We could have shortened the mult definition up even more.
Dim mult = Function(x, y) x * y
In this line, Visual Basic would infer the same function, but it would use the Object data type throughout instead of Integer. Also, if you have Option Strict set to On (which you should), this line will not compile until you add the appropriate As clauses.
Expression Trees
Internally, the Visual Basic compiler changes a lambda expression into an "expression tree," a hierarchical structure that associates operands with their operators. Consider this semicomplex lambda expression that raises a multiplied expression to a power:
Dim calculateIt = Function(x, y, z) (x * y) ^ z
Visual Basic generates an expression tree for calculateIt that looks like Figure 9.2, "Expression trees group operands by operator".
Figure 9.2. Expression trees group operands by operator
When it comes time to use a lambda expression, Visual Basic traverses the tree, calculating values from the lower levels up to the top. These expression trees are stored as objects based on classes in the System.Linq.Expressions namespace. If you don't like typing lambda expressions, you can build up your own expression trees using these objects. However, my stomach is rumbling even more, so I'm going to leave that out of the book.
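As a hedged cross-language sketch (not from the chapter), Python's standard ast module builds the same kind of tree, and a small walker can evaluate it bottom-up, exactly as described:

```python
import ast

# Parse (x * y) ** z into an abstract syntax tree, the Python
# analogue of the expression tree Visual Basic builds internally.
tree = ast.parse("(x * y) ** z", mode="eval")

def evaluate(node, env):
    """Walk the tree bottom-up, combining operand values by operator."""
    if isinstance(node, ast.Expression):
        return evaluate(node.body, env)
    if isinstance(node, ast.BinOp):
        left = evaluate(node.left, env)
        right = evaluate(node.right, env)
        if isinstance(node.op, ast.Mult):
            return left * right
        if isinstance(node.op, ast.Pow):
            return left ** right
        raise NotImplementedError(type(node.op).__name__)
    if isinstance(node, ast.Name):
        return env[node.id]
    raise NotImplementedError(type(node).__name__)

# The lower Mult node is computed first, then the Pow node at the top:
# (2 * 3) ** 2 = 36
print(evaluate(tree, {"x": 2, "y": 3, "z": 2}))
```

The walker reaches the leaves (the operand names), resolves them, and combines values upward through each operator node, mirroring the traversal order described above.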
Complex Lambdas
Although lambda expressions can't contain Visual Basic statements such as For...Next loops, you can still build up some pretty complex calculations using standard operators. Calls out to other functions can also appear in lambdas. In this code sample, mult defers its work to the MultiplyIt function:
Private Sub DoSomeMultiplication( )
Dim mult = Function(x As Integer, y As Integer) _
MultiplyIt(x, y) + 10
MsgBox(mult(5, 6)) ' Displays 40
End Sub
Public Function MultiplyIt(ByVal x As Integer, _
ByVal y As Integer) As Integer
Return x * y
End Function
That's pretty straightforward. But things get more interesting when you have lambda expressions that return other lambda expressions. Lambda calculus was invented partially to see how any complex function could be broken down into the most basic of functions. Even literal values can be defined as lambdas. Here's the lambda expression that always returns the value 3:
Dim three = Function( ) 3
You've already seen lambda expressions that accept more than one argument:
Dim mult1 = Function(x As Integer, y As Integer) x * y
In lambda calculus, this can be broken down into smaller functionettes, where each includes only a single argument:
Dim mult2 = Function(x As Integer) Function(y As Integer) x * y
The data type of mult2 is not exactly the same as mult1's data type, but they both generate the same answer from the same x and y values. When you use mult1, it calculates the product of x and y and returns it. When you use mult2, it first runs the Function(x As Integer) part, which returns another lambda calculated by passing the value of x into its definition. If you pass in "5" as the value for x, the returned lambda is:
Function(y As Integer) 5 * y
This lambda is then calculated, and the product of 5 and y is returned. Calling mult2 in code is also slightly different. You don't pass in both arguments at once. Instead, you pass in the argument for x, and then pass y to the returned initial lambda.
MsgBox(mult2(5)(6)) ' Displays 30
When run, the mult2(5) part gets replaced with the first returned lambda. Then that first returned lambda is processed using (6) as its y argument. Isn't that simple? Well, no, it isn't. And that's OK, since the two-argument mult1 works just fine. The important part to remember is that it's possible to build complex lambda expressions up from more basic lambda expressions. Visual Basic will use this fact when it generates the code for your LINQ-related expressions. We'll talk more about it in Chapter 17, LINQ, but even then, Visual Basic will manage a lot of the LINQ-focused lambda expressions for you behind the scenes.
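To make the type difference concrete, here is a sketch with the delegate types spelled out using the System.Func delegates (the explicit declarations are mine, not from the samples above):

Dim mult1 As Func(Of Integer, Integer, Integer) = _
    Function(x As Integer, y As Integer) x * y
Dim mult2 As Func(Of Integer, Func(Of Integer, Integer)) = _
    Function(x As Integer) Function(y As Integer) x * y

Both produce the same product for the same x and y values; only their declared types differ, with mult2 being a function that returns another function.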
Variable Lifting
Although you can pass arguments into a lambda expression, you may also use other variables that are within the scope of the lambda expression.
Private Sub NameMyChild( )
Dim nameLogic = GetChildNamingLogic( )
MsgBox(nameLogic("John")) ' Displays: Johnson
End Sub
Private Function GetChildNamingLogic( ) As _
Func(Of String, String)
Dim nameSuffix As String = "son"
Dim newLogic = Function(baseName As String) _
baseName & nameSuffix
Return newLogic
End Function
The GetChildNamingLogic function returns a lambda expression. That lambda expression is used in the NameMyChild method, passing John as an argument to the lambda. And it works. The question is how. The problem is that nameSuffix, used in the lambda expression's logic, is a local variable within the GetChildNamingLogic method. All local variables are destroyed whenever a method exits. By the time the MsgBox function is called, nameSuffix will be long gone. Yet the code works as though nameSuffix lived on.
To make this code work, Visual Basic uses a new feature called variable lifting. Seeing that nameSuffix will be accessed outside the scope of GetChildNamingLogic, Visual Basic rewrites your source code, changing nameSuffix from a local variable to a variable that has a wider scope.
In the new version of the source code, Visual Basic adds a closure class, a dynamically generated class that contains both the lambda expression and the local variables used by the expression. When you combine these together, any code that gets access to the lambda expression will also have access to the "local" variable.
Private Sub NameMyChild( )
Dim nameLogic = GetChildNamingLogic( )
End Sub
Public Class GeneratedClosureClass
Public nameSuffix As String = "son"
Public newLogic As Func(Of String, String) = _
Function(baseName As String) baseName & Me.nameSuffix
End Class
Private Function GetChildNamingLogic( ) As _
Func(Of String, String)
Dim localClosure As New GeneratedClosureClass
localClosure.nameSuffix = "son"
Return localClosure.newLogic
End Function
The actual code generated by Visual Basic is more complex than this, and it would include all of that function-delegate converted code I wrote about earlier. But this is the basic idea. Closure classes and variable lifting are essential features for lambda expressions since you can never really know where your lambda expressions are at all hours of the night.
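One consequence of lifting is that the captured variable is shared, not copied; later changes to it are visible through the lambda. A small sketch of my own to illustrate:

Private Sub ShowSharedState( )
    Dim suffix As String = "son"
    Dim makeName = Function(baseName As String) baseName & suffix
    ' ----- Change the captured variable after creating the lambda.
    suffix = "sen"
    MsgBox(makeName("John")) ' Displays: Johnsen
End Sub

The lambda sees the current value of suffix at call time, because both the lambda and the variable live in the same closure class.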
Object Initializers
To initialize object properties not managed by constructors, you need to assign those properties separately, just after you create the class instance.
Dim newHire As New Employee
newHire.Name = "John Doe"
newHire.HireDate = #2/27/2008#
newHire.Salary = 50000@
The With...End With statement provides a little more structure.
Dim newHire As New Employee
With newHire
.Name = "John Doe"
.HireDate = #2/27/2008#
.Salary = 50000@
End With
A new syntax included in Visual Basic 2008 lets you combine declaration (with the New keyword) and member assignment. The syntax includes a new variation of the With statement.
Dim newHire As New Employee With { _
.Name = "John Doe", _
.HireDate = #2/27/2008#, _
.Salary = 50000@}
Well, as far as new features go, it's not glitzy like lambda expressions or variable lifting. But it gets the job done.
Debugging and error processing are two of the most essential programming activities you will ever perform. There are three absolutes in life: death, taxes, and software bugs. Even in a relatively bug-free application, there is every reason to believe that a user will just mess things up royally. As a programmer, your job is to be the guardian of the user's data as managed by the application, and to keep it safe, even from the user's own negligence (or malfeasance), and also from your own source code.
I recently spoke with a developer from a large software company headquartered in Redmond, Washington; you might know the company. This developer told me that in any given application developed by this company, more than 50% of the code is dedicated to dealing with errors, bad data, system exceptions, and failures. Certainly, all this additional code slows down each application and adds a lot of overhead to what is already called "bloatware." But in an age of hackers and data entry mistakes, such error management is an absolute must.
Testing-although not a topic covered in this book-goes hand in hand with error management. Often, the report of an error will lead to a bout of testing, but it should really be the other way around: testing should lead to the discovery of errors. A few years ago, NASA's Mars Global Surveyor, in orbit around the red planet, captured images of the Beagle 2, a land-based research craft that crashed into the Martian surface in 2003. An assessment of the Beagle 2's failure pinpointed many areas of concern, with a major issue being inadequate testing:
This led to an attenuated testing programme to meet the cost and schedule constraints, thus inevitably increasing technical risk. (From Beagle 2 ESA/UK Commission of Inquiry Report, April 5, 2004, Page 4)
Look at all those big words. Boy, the Europeans sure have a way with language. Perhaps a direct word-for-word translation into American English will make it clear what the commission was trying to convey:
They didn't test it enough, and probably goofed it all up.
You will deal with three major categories of errors in your Visual Basic applications:
Compile-time errors
Some errors are so blatant that Visual Basic will refuse to compile your application. Generally, such errors are due to simple syntax issues that can be corrected with a few keystrokes. But you can also enable features in your program that will increase the number of errors recognized by the compiler. For instance, if you set Option Strict to On in your application or source code files, implicit narrowing conversions will generate compile-time errors.
' ----- Assume: Option Strict On
Dim bigData As Long = 5&
Dim smallData As Integer
' ----- The next line will not compile.
smallData = bigData
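With Option Strict On, the cure is an explicit conversion; a minimal sketch:

' ----- CInt makes the narrowing intent explicit. This compiles,
'       but still throws an OverflowException at runtime if
'       bigData exceeds the Integer range.
smallData = CInt(bigData)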
Visual Studio 2008 includes features that help you locate and resolve compile-time errors. Such errors are marked with a "blue squiggle" below the offending syntax. Some errors also prompt Visual Studio to display corrective options through a pop-up window, as shown in Figure 9.3, "Error correction options for a narrowing conversion".
Figure 9.3. Error correction options for a narrowing conversion
Error correction options for a narrowing conversion
Runtime errors
Runtime errors occur when a combination of data and code causes an invalid condition in what otherwise appears to be valid code. Such errors frequently occur when a user enters incorrect data into the application, but your own code can also generate runtime errors. Adequate checking of all incoming data will greatly reduce this class of errors. Consider the following block of code:
Public Function GetNumber( ) As Integer
' ----- Prompt the user for a number.
' Return zero if the user clicks Cancel.
Dim useAmount As String
' ----- InputBox returns a string with whatever
' the user types in.
useAmount = InputBox("Enter number.")
If (IsNumeric(useAmount) = True) Then
    ' ----- Convert to an integer and return it.
    Return CInt(useAmount)
Else
    ' ----- Invalid data. Return zero.
    Return 0
End If
End Function
This code looks pretty reasonable, and in most cases, it is. It prompts the user for a number, converts valid numbers to integer format, and returns the result. The IsNumeric function will weed out any invalid non-numeric entries. Calling this function will, in fact, return valid integers for entered numeric values, and 0 for invalid entries.
But what happens when a fascist dictator tries to use this code? As history has shown, a fascist dictator will enter a value such as "342304923940234." Because it's a valid number, it will pass the IsNumeric test with flying colors, but since it exceeds the size of the Integer data type, it will generate the dreaded runtime error shown in Figure 9.4, "An error message only a fascist dictator could love".
Figure 9.4. An error message only a fascist dictator could love
An error message only a fascist dictator could love
Without additional error-handling code or checks for valid data limits, the GetNumber routine generates this runtime error, and then causes the entire program to abort. Between committing war crimes and entering invalid numeric values, there seems to be no end to the evil that fascist dictators will do.
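One way to guard against such oversized entries is to use the framework's Integer.TryParse method in place of IsNumeric; a sketch (the function name is mine):

Public Function GetNumberSafely( ) As Integer
    ' ----- Prompt the user for a number. Return zero
    '       on cancel, junk input, or overflow.
    Dim useAmount As String = InputBox("Enter number.")
    Dim result As Integer
    If Integer.TryParse(useAmount, result) Then
        Return result
    Else
        Return 0
    End If
End Function

TryParse rejects "342304923940234" without raising a runtime error, because the value exceeds Integer.MaxValue.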
Logic errors
Logic errors are the third, and the most insidious, type of error. They are caused by you, the programmer; you can't blame the user on this one. From process-flow issues to incorrect calculations, logic errors are the bane of software development, and they result in more required debugging time than the other two types of errors combined.
Logic errors are too personal and too varied to directly address in this book. You can force many logic errors out of your code by adding sufficient checks for invalid data, and by adequately testing your application under a variety of conditions and circumstances.
You won't have that much difficulty dealing with compile-time errors. A general understanding of Visual Basic and .NET programming concepts, and regular use of the tools included with Visual Studio 2008, will help you quickly locate and eliminate them.
The bigger issue is: what do you do with runtime errors? Even if you check all possible data and external resource conditions, it's impossible to prevent all runtime errors. You never know when a network connection will suddenly go down, or the user will trip over the printer cable, or a scratch on a DVD will generate data corruption. Anytime you deal with resources that exist outside your source code, you are taking a chance that runtime errors will occur.
Figure 9.4, "An error message only a fascist dictator could love" showed you what Visual Basic does when it encounters a runtime error: it displays to the user a generic error dialog, and offers a chance to ignore the error (possible corruption of any unsaved data) or exit the program immediately (complete loss of any unsaved data).
Although both of these user actions leave much to the imagination, they don't instill consumer confidence in your coding skills. Trust me on this: the user will blame you for any errors generated by your application, even if the true problem was far removed from your code.
Fortunately, Visual Basic includes three tools to help you deal completely with runtime errors, if and when they occur. These three Visual Basic features-unstructured error handling, structured error handling, and unhandled error handling-can all be used in any Visual Basic application to protect the user's data-and the user-from unwanted errors.
Unstructured Error Handling
Unstructured error handling has been a part of Visual Basic since it first debuted in the early 1990s. It's simple to use, catches all possible errors in a block of code, and can be enabled or disabled as needed. By default, methods and property procedures include no error handling at all, so you must add error-handling code-unstructured or structured-to every routine where you feel it is needed.
The idea behind unstructured error handling is pretty basic. You simply add a line in your code that says, "If any errors occur at all, temporarily jump down to this other section of my procedure where I have special code to deal with it." This "other section" is called the error handler.
Public Sub ErrorProneRoutine( )
    ' ----- Any code you put here before enabling the
    '       error handler should be pretty resistant to
    '       runtime errors.

    ' ----- Turn on the error handler.
    On Error GoTo ErrorHandler

    ' ----- More code here with the risk of runtime errors.
    '       When all logic is complete, exit the routine.
    Exit Sub

ErrorHandler:
    ' ----- When an error occurs, the code temporarily jumps
    '       down here, where you can deal with it. When you're
    '       finished, call this statement:
    Resume
    ' ----- which will jump back to the code that caused
    '       the error. The "Resume" statement has a few
    '       variations available. If you don't want to go
    '       back to the main code, but just want to get out of
    '       this routine as quickly as possible, call:
    Exit Sub
End Sub
The On Error statement enables or disables error handling in the routine. When an error occurs, Visual Basic places the details of that error in a global Err object. This object stores a text description of the error, the numeric error code of the error (if available), related online help details, and other error-specific values. I'll list the details a little later.
You can include as many On Error statements in your code as you want, and each one could direct errant code to a different label. You could have one error handler for network errors, one for file errors, one for calculation errors, and so on. Or you could have one big error handler that uses If...Then...Else statements to examine the error condition stored in the global Err object.
If (Err.Number = 5) Then
    ' ----- Handle error-code-5 issues here.
Else
    ' ----- Handle all other issues here.
End If
You can find specific error numbers for common errors in the online documentation for Visual Studio, but it is this dependence on hardcoded numbers that makes unstructured error handling less popular today than it was before .NET. Still, you are under no obligation to treat errors differently based on the type of error. As long as you can recover from error conditions reliably, it doesn't always matter what the cause of the error was. Many times, when it's not the end of the world if a procedure fails to complete in an error-free manner, I simply report the error details to the user, and skip the errant line.
Public Sub DoSomeWork( )
    On Error GoTo ErrorHandler

    ' ----- Logic code goes here.
    Exit Sub

ErrorHandler:
    MsgBox("An error occurred in 'DoSomeWork':" & _
        vbCrLf & Err.Description)
    Resume Next
End Sub
This block of code reports the error, and then uses the Resume Next statement (a variation of the standard Resume statement) to return to the code line immediately following the one that caused the error. Another option uses Resume some_other_label, which returns control to some specific named area of the code.
Disabling Error Handling
Using On Error GoTo enables a specific error handler. Although you can use a second On Error GoTo statement to redirect errors to another error handler in your procedure, a maximum of one error handler can be in effect at any moment. Once you have enabled an error handler, it stays in effect until the procedure ends, you redirect errors to another handler, or you specifically turn off error handling in the routine. To take this last route, issue the following statement:
On Error GoTo 0
Ignoring Errors
Your error handler doesn't have to do anything special. Consider this error-handling block:
ErrorHandler:
    Resume Next
When an error occurs, this handler immediately returns control to the line just following the one that generated the error. Visual Basic includes a shortcut for this action.
On Error Resume Next
By issuing the On Error Resume Next statement, all errors will populate the Err object (as is done for all errors, no matter how they are handled), and then skip the line generating the error. The user will not be informed of the error, and will continue to use the application in an ignorance-is-bliss stupor.
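Even in that blissful state, the Err object still records each failure, so you can check it selectively after a risky statement. A sketch, with a hypothetical file path:

On Error Resume Next
Dim contents As String = _
    My.Computer.FileSystem.ReadAllText("C:\Temp\Missing.txt")
If (Err.Number <> 0) Then
    ' ----- The read failed; fall back to an empty document.
    contents = ""
    Err.Clear( )
End If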
Unstructured error handling was the only method of error handling available in Visual Basic before .NET. Although it was simple to use, it didn't fulfill the hype that surrounded the announcement that the 2002 release of Visual Basic .NET would be an object-oriented programming (OOP) system. Therefore, Microsoft also added structured error handling to the language, a method that uses standard objects to communicate errors, and error-handling code that is more tightly integrated with the code it monitors.
Structured Error Handling
This form of error processing uses a multiline Try...Catch...Finally statement to catch and handle errors.
Try
    ' ----- Add error-prone code here.
Catch ex As Exception
    ' ----- Error-handling code here.
Finally
    ' ----- Cleanup code goes here.
End Try
The Try Clause
Try statements are designed to monitor smaller chunks of code. Although you could put all the source code for your procedure within the Try block, it's more common to put within that section only the statements that are likely to generate errors.
Try
    My.Computer.FileSystem.RenameFile(existingFile, newName)
Catch ex As Exception
    ' ----- Deal with the error here.
End Try
"Safe" statements can remain outside the Try portion of the Try...End Try statement. Exactly what constitutes a "safe" programming statement is a topic of much debate, but two types of statements are generally unsafe: (1) those statements that interact with external systems, such as disk files, network or hardware resources, or even large blocks of memory; and (2) those statements that could cause a variable or expression to exceed the designed limits of the data type for that variable or expression.
The Catch Clause
The Catch clause defines an error handler. As with unstructured error handling, you can include one global error handler in a Try statement, or you can include multiple handlers for different types of errors. Each handler includes its own Catch keyword.
Catch ex As ErrorClass
The ex identifier provides a variable name for the active error object that you can use within the Catch section. You can give it any name you wish; it can vary from Catch clause to Catch clause, but it doesn't have to.
ErrorClass identifies an exception class, a special class specifically designed to convey error information. The most generic exception class is System.Exception; other, more specific exception classes derive from System.Exception. Since Try...End Try implements "object-oriented error processing," all the errors must be stored as objects. The .NET Framework includes many predefined exception classes already derived from System.Exception that you can use in your application. For instance, System.DivideByZeroException catches any errors that (obviously) stem from dividing a number by zero.
Try
    result = firstNumber / secondNumber
Catch ex As System.DivideByZeroException
MsgBox("Divide by zero error.")
Catch ex As System.OverflowException
MsgBox("Divide resulting in an overflow.")
Catch ex As System.Exception
MsgBox("Some other error occurred.")
End Try
When an error occurs, your code tests the exception against each Catch clause until it finds a matching class. The Catch clauses are examined in order from top to bottom, so make sure you put the most general one last; if you put System.Exception first, no other Catch clauses in that Try block will ever trigger because every exception matches System.Exception. How many Catch clauses you include, or which exceptions they monitor, is up to you. If you leave out all Catch clauses completely, it will act somewhat like an On Error Resume Next statement, although if an error does occur, all remaining statements in the Try block will be skipped. Execution continues with the Finally block, and then with the code following the entire Try statement.
The Finally Clause
The Finally clause represents the "do this or die" part of your Try block. If an error occurs in your Try statement, the code in the Finally section will always be processed after the relevant Catch clause is complete. If no error occurs, the Finally block will still be processed before leaving the Try statement. If you issue a Return statement somewhere in your Try statement, the Finally block will still be processed before leaving the routine. (This is getting monotonous.) If you use the Exit Try statement to exit the Try block early, the Finally block is still executed. If, while your Try block is being processed, your boss announces that a free catered lunch is starting immediately in the big meeting room and everyone is welcome, the Finally code will also be processed, but you might not be there to see it.
Finally clauses are optional, so you include one only when you need it. The only time that Finally clauses are required is when you omit all Catch clauses in a Try statement.
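That Try...Finally form (no Catch at all) is handy for guaranteed cleanup; a sketch, where ProcessConfig is a hypothetical method of your own:

Dim reader As New IO.StreamReader("C:\Temp\Config.txt")
Try
    ProcessConfig(reader.ReadLine( ))
Finally
    ' ----- Runs whether or not ProcessConfig throws an error.
    reader.Close( )
End Try

Because there is no Catch clause, any exception still travels up the call stack, but the file is closed either way.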
Unhandled Errors
I showed you earlier in the chapter how unhandled errors can lead to data corruption, crashed applications, and spiraling, out-of-control congressional spending. All good programmers understand how important error-handling code is, and they make the extra effort of including either structured or unstructured error-handling code. Yet there are times when I, even I, as a programmer, think, "Oh, this procedure isn't doing anything that could generate errors. I'll just leave out the error-handling code and save some typing time." And then it strikes, seemingly without warning: an unhandled error. Crash! Burn! Another chunk of user data confined to the bit bucket of life.
Normally, all unhandled errors "bubble up" the call stack, looking for a procedure that includes error-handling code. For instance, consider this code:
Private Sub Level1( )
    On Error GoTo ErrorHandler
    Level2( )
    Exit Sub

ErrorHandler:
    MsgBox("Error Handled.")
    Resume Next
End Sub

Private Sub Level2( )
    Level3( )
End Sub

Private Sub Level3( )
    ' ----- The Err.Raise method forces an
    '       unstructured-style error.
    Err.Raise(1)
End Sub
When the error occurs in Level3, the application looks for an active error handler in that procedure, but finds nothing. So, it immediately exits Level3 and returns to Level2, where it looks again for an active error handler. Such a search will, sadly, be fruitless. Heartbroken, the code leaves Level2 and moves back to Level1, continuing its search for a reasonable error handler. This time it finds one. Processing immediately jumps down to the ErrorHandler block and executes the code in that section.
If Level1 didn't have an error handler, and no code farther up the stack included an error handler, the user would see the Error Message Window of Misery (refer to Figure 9.4, "An error message only a fascist dictator could love"), followed by the Dead Program of Disappointment.
Fortunately, Visual Basic does support a "catchall" error handler that traps such unmanaged exceptions and lets you do something about them. This feature works only if you have the "Enable application framework" field selected on the Application tab of the project properties. To access the code template for the global error handler, click the View Application Events button on that same project properties tab. Select "(MyApplication Events)" from the Class Name drop-down list above the source code window, and then select UnhandledException from the Method Name list. The following procedure appears in the code window:
Private Sub MyApplication_UnhandledException( _
ByVal sender As Object, _
ByVal e As Microsoft.VisualBasic. _
ApplicationServices.UnhandledExceptionEventArgs) _
Handles Me.UnhandledException
End Sub
Add your special global error-handling code to this routine. The e event argument includes an Exception member that provides access to the details of the error via a System.Exception object. The e.ExitApplication member is a Boolean property that you can modify either to continue or to exit the application. By default, it's set to True, so modify it if you want to keep the program running.
Even when the program does stay running, you will lose the active event path that triggered the error. If the error stemmed from a click on some button by the user, that entire Click event, and all of its called methods, will be abandoned immediately, and the program will wait for new input from the user.
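A minimal body for that handler, assuming you only want to record the problem and keep the application alive:

Private Sub MyApplication_UnhandledException( _
        ByVal sender As Object, _
        ByVal e As Microsoft.VisualBasic. _
        ApplicationServices.UnhandledExceptionEventArgs) _
        Handles Me.UnhandledException
    ' ----- Log the details, then veto the shutdown.
    My.Application.Log.WriteException(e.Exception)
    e.ExitApplication = False
End Sub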
In addition to simply watching for them and screaming "Error!", there are a few other things you should know about error management in Visual Basic programs.
Generating Errors
Believe it or not, there are times when you might want to generate runtime errors in your code. In fact, many of the runtime errors you encounter in your code occur because Microsoft wrote code in the Framework Class Libraries (FCLs) that specifically generates errors. This is by design.
Let's say that you had a class property that was to accept only percentage values from 0 to 100, but as an Integer data type.
Private StoredPercent As Integer
Public Property InEffectPercent( ) As Integer
    Get
        Return StoredPercent
    End Get
    Set(ByVal value As Integer)
        StoredPercent = value
    End Set
End Property
Nothing is grammatically wrong with this code, but it will not stop anyone from setting the stored percent value to either 847 or −847, both outside the desired range. You can add an If statement to the Set accessor to reject invalid data, but properties don't provide a way to return a failed status code. The only way to inform the calling code of a problem is to generate an exception.
Set(ByVal value As Integer)
    If (value < 0) Or (value > 100) Then
        Throw New ArgumentOutOfRangeException("value", _
            value, "The allowed range is from 0 to 100.")
    End If
    StoredPercent = value
End Set
Now, attempts to set the InEffectPercent property to a value outside the 0-to-100 range will generate an error, an error that can be caught by On Error or Try...Catch error handlers. The Throw statement accepts a System.Exception (or derived) object as its argument, and sends that exception object up the call stack on a quest for an error handler.
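From the caller's side, that thrown exception looks like any other runtime error; a sketch, where report is a hypothetical instance of the class above:

Try
    report.InEffectPercent = 847
Catch ex As ArgumentOutOfRangeException
    ' ----- ex.Message includes the text passed to Throw.
    MsgBox(ex.Message)
End Try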
Similar to the Throw statement is the Err.Raise method. It lets you generate errors using a number-based error system more familiar to Visual Basic 6.0 and earlier environments. I recommend that you use the Throw statement, even if you employ unstructured error handling elsewhere in your code.
Mixing Error-Handling Methods
You are free to mix both unstructured and structured error-handling methods within your application, but a single procedure or method may use only one of them. That is, you may not use both On Error and Try...Catch...Finally in the same routine. A routine that uses On Error may, however, call another routine that uses Try...Catch...Finally with no problems.
Now you may be thinking to yourself, "Self, I can easily see times when I would want to use unstructured error handling, and other times when I would opt for the more structured approach." It all sounds very reasonable, but let me warn you in advance that there are error-handling zealots out there who will ridicule you for decades if you ever use an On Error statement in your code. For these programmers, "object-oriented purity" is essential, and any code that uses nonobject methods to achieve what could be done through an OOP approach must be destroyed.
I'm about to use a word that I forbid my elementary-school-aged son to use. If you have tender ears, cover them now, though it won't protect you from seeing the word on the printed page.
Rejecting the On Error statement like this is just plain stupid. As you may remember from earlier chapters, everything in your .NET application is object-oriented, since all the code appears in the context of an object. If you are using unstructured error handling, you can still get to the relevant exception object through the Err.GetException( ) method, so it's not really an issue of objects.
Determining when to use structured or unstructured error handling is no different from deciding to use C# or Visual Basic to write your applications. For most applications, the choice is irrelevant. One language may have some esoteric features that may steer you in that direction (such as optional method arguments in Visual Basic), but the other 99.9% of the features are pretty much identical.
The same is true of error-handling methods. There may be times when one is just plain better than the other. For instance, consider the following code that calls three methods, none of which includes its own error handler:
On Error Resume Next
RefreshPart1( )
RefreshPart2( )
RefreshPart3( )
Clearly, I don't care whether an error occurs in one of the routines or not. If an error causes an early exit from RefreshPart1, the next routine, RefreshPart2, will still be called, and so on. I often need more diligent error-checking code than this, but in low-impact code, this is sufficient. To accomplish the same thing using structured error handling would be a little more involved.
Try
    RefreshPart1( )
Catch ex As Exception
End Try
Try
    RefreshPart2( )
Catch ex As Exception
End Try
Try
    RefreshPart3( )
Catch ex As Exception
End Try
That's a lot of extra code for the same functionality. If you're an On Error statement hater, by all means use the second block of code. But if you are a more reasonable programmer, the type of programmer who would read a book such as this, use each method as it fits into your coding design.
The System.Exception Class
The System.Exception class is the base class for all structured exceptions. When an error occurs, you can examine its members to determine the exact nature of the error. You also use this class (or one of its derived classes) to build your own custom exception in anticipation of using the Throw statement. Table 9.1, "Members of the System.Exception class" lists the members of this object.
Table 9.1. Members of the System.Exception class
Object member
Data property
Provides access to a collection of key-value pairs, each providing additional exception-specific information.
HelpLink property
Identifies online help location information relevant to this exception.
InnerException property
If an exception is a side effect of another error, the original error appears here.
Message property
A textual description of the error.
Source property
Identifies the name of the application or object that caused the error.
StackTrace property
Returns a string that fully documents the current stack trace, the list of all active procedure calls that led to the statement causing the error.
TargetSite property
Identifies the name of the method that triggered the error.
Classes derived from System.Exception may include additional properties that provide additional detail for a specific error type.
The Err Object
The Err object provides access to the most recent error through its various members. Anytime an error occurs, Visual Basic documents the details of the error in this object's members. It's often accessed within an unstructured error handler to reference or display the details of the error. Table 9.2, "Members of the Err object" lists the members of this object.
Table 9.2. Members of the Err object
Object member
Clear method
Clear all the properties in the Err object, setting them to their default values. Normally, you use the Err object only to determine the details of a triggered error. But you can also use it to initiate an error with your own error details. See the description of the Raise method later in the table.
Description property
A textual description of the error.
Erl property
The line number label nearest to where the error occurred. In modern Visual Basic applications, numeric line labels are almost never used, so this field is generally 0.
HelpContext property
The location within an online help file relevant to the error. If this property and the HelpFile property are set, the user can access relevant online help information.
HelpFile property
The online help file related to the active error.
LastDLLError property
The numeric return value from the most recent call to a pre-.NET DLL, whether it is an error or not.
Number property
The numeric code for the active error.
Raise method
Use this method to generate a runtime error. Although this method does include some arguments for setting other properties in the Err object, you can also set the properties yourself before calling the Raise method. Any properties you set will be retained in the object for examination by the error-handler code that receives the error.
Source property
The name of the application, class, or object that generated the active error.
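The members in the table work together during unstructured error handling. The following quick sketch (not from the Library Project; the error number, source, and message are arbitrary choices for illustration) raises an error on purpose, then examines and clears the recorded details.

Public Sub ShowErrDetails()
   On Error Resume Next

   ' ----- Trigger an error on purpose; 51 is "Internal error."
   Err.Raise(Number:=51, Source:="ShowErrDetails", _
      Description:="Something went wrong.")

   ' ----- Examine the details recorded in the Err object.
   If Err.Number <> 0 Then
      MsgBox("Error " & Err.Number & " in " & Err.Source & _
         ": " & Err.Description)
      Err.Clear()   ' Reset all members to their defaults
   End If
End Sub

Because On Error Resume Next is in effect, execution continues past the Raise call, leaving the error details in the Err object for examination.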
The Debug Object
Visual Basic 6.0 (and earlier) included a handy tool that would quickly output debug information from your program, displaying such output in the "Immediate Window" of the Visual Basic development environment.
Debug.Print "Reached point G in code"
The .NET version of Visual Basic enhances the Debug object with more features, and a slight change in syntax. The Print method is replaced with WriteLine; a separate Write method outputs text without a final carriage return.
Debug.WriteLine("Reached point G in code")
Everything you output using the WriteLine (or similar) method goes to a series of "listeners" attached to the Debug object. You can add your own listeners, including output to a work file. But the Debug object is really used only when debugging your program. Once you compile a final release, none of the Debug-related features works anymore, by design.
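As a brief sketch of the listener idea, the following fragment attaches a TextWriterTraceListener so that Debug output also goes to a work file. The file path is just an example; use whatever location suits your system.

' ----- Copy Debug output to a work file in addition to the
'       standard debug output. The path is arbitrary.
Debug.Listeners.Add(New System.Diagnostics. _
   TextWriterTraceListener("C:\Temp\DebugOutput.txt"))
Debug.WriteLine("Reached point G in code")
Debug.Flush()   ' Make sure the text reaches the file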
If you wish to log status data from a released application, consider using the My.Application.Log object instead (or My.Log in ASP.NET programs). Similar to the Debug object, the Log object sends its output to any number of registered listeners. By default, all output goes to the standard debug output (just like the Debug object) and to a logfile created specifically for your application's assembly. See the online help for the My.Application.Log object for information on configuring this object to meet your needs.
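A minimal sketch of the Log object in use appears below; the message text and severity level are arbitrary choices for illustration.

' ----- Write a status entry and an exception to all
'       registered log listeners.
My.Application.Log.WriteEntry("Application started.", _
   TraceEventType.Information)
Try
   ' ----- Risky code here.
Catch ex As System.Exception
   My.Application.Log.WriteException(ex)
End Try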
Other Visual Basic Error Features
The Visual Basic language includes a few other error-specific statements and features that you may find useful:
ErrorToString function
This function returns the error message associated with a numeric system error code. For instance, ErrorToString(10) returns "This array is fixed or temporarily locked." It is useful only with older unstructured error codes.
IsError function
When you supply an object argument to this function, it returns True if the object is a System.Exception (or derived) object.
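Both functions can be seen in a short sketch; the test object here is an arbitrary exception instance created just for illustration.

' ----- ErrorToString converts an old-style error code to text.
Dim message As String = ErrorToString(10)

' ----- IsError tests whether an object is an exception instance.
Dim something As Object = New System.ArgumentException("Bad value")
If IsError(something) Then
   MsgBox("The object holds an exception: " & message)
End If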
The best program in the world would never generate errors, I guess. But come on, it's not reality. If a multimillion-dollar Mars probe is going to crash on a planet millions of miles away, even after years of advanced engineering, my customer-tracking application for a local video rental shop is certainly going to have a bug or two. But you can mitigate the impact of these bugs using the error-management features included with Visual Basic.
This chapter's project code will be somewhat brief. Error-handling code will appear throughout the entire application, but we'll add it in little by little as we craft the project. For now, let's just focus on the central error-handling routines that will take some basic action when an error occurs anywhere in the program. As for lambda expressions, we'll hold off on such code until a later chapter.
General Error Handler
As important and precise as error handling needs to be, the typical business application will not encounter a large variety of error types. Applications such as the Library Project are mainly vulnerable to three types of errors: (1) data entry errors; (2) errors that occur when reading data from, or writing data to, a database table; and (3) errors related to printing. Sure, there may be numeric overflow errors or other errors related to in-use data, but it's mostly interactions with external resources, such as the database, that concern us.
Because of the limited types of errors occurring in the application, it's possible to write a generic routine that informs the user of the error in a consistent manner. Each time a runtime error occurs, we will call this central routine, just to let the user know what's going on. The code block where the error occurred can then decide whether to take any special compensating action, or continue on as though no error occurred.
Load the Chapter 9, Functional Programming (Before) Code project, either through the New Project templates or by accessing the project directly from the installation directory. To see the code in its final form, load Chapter 9, Functional Programming (After) Code instead.
In the project, open the General.vb class file, and add the following code as a new method to Module General.
Insert Chapter 9, Functional Programming, Snippet Item 1.
Public Sub GeneralError(ByVal routineName As String, _
      ByVal theError As System.Exception)
   ' ----- Report an error to the user.
   MsgBox("The following error occurred at location '" & _
      routineName & "':" & vbCrLf & vbCrLf & _
      theError.Message, _
      MsgBoxStyle.OKOnly Or MsgBoxStyle.Exclamation)
End Sub
Not much to that code, is there? So, here's how it works. When you encounter an error in some routine, the in-effect error handler calls the central GeneralError method.
Public Sub SomeRoutine()
   On Error GoTo ErrorHandler

   ' ----- Lots of code here.
   Exit Sub

ErrorHandler:
   GeneralError("SomeRoutine", Err.GetException())
   Resume Next
End Sub
You can use it with structured errors as well.
Try
   ' ----- Troubling code here.
Catch ex As System.Exception
   GeneralError("SomeRoutine", ex)
End Try
The purpose of the GeneralError global method is simple: communicate to the user that an error occurred, and then move on. It's meant to be simple, and it is simple. You could enhance the routine with some additional features. Logging of the error out to a file (or any other active log listener) might assist you later if you needed to examine application-generated errors. Add the following code to the routine, just after the MsgBox command, to record the exception.
Insert Chapter 9, Functional Programming, Snippet Item 2.
Of course, if an error occurs while writing to the log, that would be a big problem, so add one more line to the start of the GeneralError routine.
Insert Chapter 9, Functional Programming, Snippet Item 3.
On Error Resume Next
Unhandled Error Capture
As I mentioned earlier, it's a good idea to include a global error handler in your code, in case some error gets past your defenses. To include this code, display all files in the Solution Explorer using the Show All Files button, open the ApplicationEvents.vb file, and add the following code to the MyApplication class.
Insert Chapter 9, Functional Programming, Snippet Item 4.
Private Sub MyApplication_UnhandledException( _
      ByVal sender As Object, ByVal e As Microsoft. _
      VisualBasic.ApplicationServices. _
      UnhandledExceptionEventArgs) _
      Handles Me.UnhandledException
   ' ----- Record the error, and keep running.
   e.ExitApplication = False
   GeneralError("Unhandled Exception", e.Exception)
End Sub
Since we already have the global GeneralError routine to log our errors, we might as well take advantage of it here.
That's it for functional and error-free programming. In the next chapter, which covers database interactions, we'll make frequent use of this error-handling code.
Thursday, October 25, 2012
Sustainable City Services: Housing
Sustainable City Services: Cycle of Housing Stock and Age of Residents
When looking at city services, do the needs fluctuate from decade to decade or half century to half century? For example, robust school enrollment may be followed by declining enrollment and then a rise in enrollment again. With the approaching "grey tsunami," the need for senior services will greatly increase. However, in roughly 40 years, when the Boomer generation is mostly gone, the need for senior services will be reduced.
Littleton, located near Denver, has done a good job in making the city an enjoyable place to live. Many residents want to stay in the city as they age. However, a number of residents have expressed a desire to downsize their larger houses as they become empty nesters. The problem is limited housing options within city limits, much less within the family neighborhoods that they have grown to love. Thus, to downsize, they would have to move to a new neighborhood or out of the city. The challenges that Littleton faces are becoming more widespread among cities.
This lack of housing appealing to older adults appears to be a major reason for school enrollment fluctuation. The cycle of young families moving into newly constructed neighborhoods is reflected in increasing school enrollment. As the families age, the kids graduate and move out of the house, but the parents cannot downsize, which keeps the family house under-occupied for 10, 20, or 30-plus years. Can schools weather the long cycles of housing turnover to young families?
As a case example, the graph below shows the fluctuation of Littleton Public Schools enrollment.1 In the last few years, Littleton did close 2 schools due to falling enrollment.
The second graph shows the cycle in average household size, assuming a built-out community where people do not want to move out as they age. Analogous graphs would be needed for non-land locked communities still expanding in housing stock and population.
In this graph, the yellow bars represent time periods in which the city needs to provide more services and infrastructure for families with kids, such as schools and kid oriented events. The grey bars represent time periods in which the city needs to provide additional services for older residents such as shuttle service to grocery stores.
Some research indicates that neighborhoods without enough families face undesired consequences, such as bus routes being reduced when too high a percentage of passengers qualify for reduced senior rates, and grocery stores relocating. Fluctuation in needed services is expensive for cities. A more constant average household size makes providing services easier. The following are housing considerations to help provide an environment for a more stable average household size.
Potential Solutions for Existing Neighborhoods:
Modifying existing neighborhoods is one of the harder challenges for improving the city's housing stock. The following ideas are small scale and should allow existing neighborhoods to evolve over time. For all of these options, there can be too much of a good thing. The recommendation is to limit the density of each housing type.
1. Group housing typically looks like a single-family residence from the outside and is located within single-family neighborhoods. Inside, each resident has a private bedroom and possibly a private bathroom. The rest of the house is common space shared by all residents. A certified nurse or caregiver may reside on site or visit regularly. Not all zoning codes allow group housing, but allowing it will provide more options for older adults.
2. Accessory dwelling units (ADUs) allow a 2nd unit to be built on a lot with an existing house. The ADU may house a recent college graduate looking for a job or an elderly parent. ADUs can also be rented to non-family members. Again, not all zoning allows ADUs. However, including ADUs in the housing mix can increase options for families as well as provide additional rental options.
3. ADUs can be taken to the next level by allowing the ADU to be sold independently of the main house. This can provide additional flexibility for the homeowner. To encourage more accessible housing, zoning could allow the minimum lot size to be ½ the current size provided that a "universal design" house is built on each ½ lot. This would allow a homeowner in an existing single-family neighborhood to scrap the house, replace it with 2 universal design houses, and potentially live in one of the houses while selling the other to pay off the construction loan.
For New/Infill Projects:
For new or larger infill projects, cities could require variety in housing sizes to help achieve a more constant average household size over time. These new projects present opportunities to make a significant impact on the future direction of the city.
1. For a mixed-generational neighborhood, every 3rd or 4th unit should be a different size. For example, if the development is primarily a family neighborhood with 3+ bedroom houses, the "other" houses would be smaller, such as patio homes for older adults. The housing mix should attract singles, couples, families with children, and empty nesters.
2. For multi-family units, require a mix of 1, 2, and 3+ bedrooms to accommodate all family sizes. All too often, multi-family housing is not family-friendly. This needs to change to allow for more affordable family-size housing options. Accessibility is a big concern for older adults, so ensuring an adequate mix of accessible units is also very important.
One of the goals is to support people living and aging in their city. Thus, aging in the neighborhood is balanced with optimizing community resources, so that larger houses are primarily occupied by larger households. When downsizing, hopefully the person or couple is literally moving only a few feet to a familiar house, thereby minimizing the stress associated with moving. A new young family then has the opportunity to move into the city and live in the larger house.
Thursday, September 13, 2012
Sustainability Toolkit for Family Friendly Cities
Our slides for "Sustainability Toolkit for Family Friendly Cities" presented at the Downtown Colorado Inc conference September 13, 2012 are located at:\SustainablityToolkitFamilyFriendlyCities.pdf (11M)
Outline of presentation:
• Human Sustainability Introduction
• Motivation for Multi-Generational Planning
• 12 Guiding Principles for Family Friendly Cities
• Mapping of City Family, Business, and Resource Centers
• Case Example for Littleton, CO
• Case Example of Mapping for Denver Metro with Light Rail to Connect Family and Business Centers
• Sustainability Toolkit:
• Social
• Multi-Generational Public Spaces
• Housing
• Multi-Generational Housing
• Affordable Eco-Friendly Living
• Mobility
• Multi-Modal Streets
• Personal Transportation Hubs
• Education
• Purposeful Education
• Universal Resources
• More Land for Living and Food
• Water Scarcity Leads to Abundance
Each Toolkit item includes a description with design considerations, benefits for various stakeholders, and case studies.
Charging Up Your City
6 vehicle spaces converted to 12 LSV spaces with a new picnic area
Do you want to charge up your city? The following are some thoughts for charging up your community's excitement level, charging up your electrical devices, and charging up your economy.
The Denver Post June 24, 2012 article, “Getting a charge out of new law,” described a new state law effective in August. This law will allow anyone to sell electricity. One potential application is for electrical charging stations spread throughout the city. The law is aimed at charging electric vehicles, but the charging stations could be used to charge any electrical device. Consider for example, a charging station at a park. The same station could be used to charge an electric vehicle as well as a laptop, cell phone, or tablet.
For the owner of the charging station, this could be an additional source of income, be it the government entity that owns the park or a business owner with some unused rooftop space for solar panels.
The electric vehicle owners will have convenient, reasonably priced electricity to reduce "range anxiety" concerns. Refill costs may be “$4 instead of $40 or more for a tank of gas.” With the reduced emissions from the electric vehicles, everyone will benefit from the cleaner air. This could be a win-win for all stakeholders.
As much potential as the reselling of the electricity holds, this is only the tip of the iceberg, so to speak. The big impact could come from combining the charging stations with low speed vehicles (LSVs).
Of the many types of vehicles permitted on the road today in Colorado, low speed vehicles could be an up and coming mode of transportation. LSVs include neighborhood electric vehicles and golf cars (golf carts with head lights and brake lights). LSVs are already permitted on roads with a speed limit of 35 mph or less and can cross roads with a higher speed limit at intersections.
Compared with cars, LSVs cost significantly less. For senior drivers, they are also safer to operate. Thus, LSVs can provide mobility options for later in life.
The real potential of LSVs lies in the fact that the vehicles are one-fourth the size of traditional gas-powered motor vehicles. Thus, if there were enough users of LSVs, the city could restripe a few parking lots and get a 4-fold increase in parking. Or, some of the space could be reclaimed for non-parking uses. For example, consider the economic impact of this reclaimed land used for additional outdoor seating at a restaurant. Developers could use this extra space for a larger building footprint, which may translate to more sales tax revenue or property tax revenue.
There are a number of actions that planners can take:
1) Promote the use of LSVs, noting benefits such as lower cost, greater safety, lower emissions, smaller size, and more non-parking space for developers/landowners.
2) Make sure that all zoning and ordinances allow for the charging stations and solar panel installation.
3) Make sure that parking requirements are properly adjusted for LSVs.
4) Create a connectivity map of the city noting key destinations such as stores, parks, schools, and residences and roads with speed limits <= 35 mph. If the key destinations are not connected, see if there is an easy way to add the missing connectivity such as changing a speed limit from 45 to 35 mph.
5) Determine how to best reallocate the reclaimed land to benefit the city and its residents.
NOTE: This was also published in APA Colorado Q3 newsletter
Wednesday, September 5, 2012
Roofs as Economic Generators
Creating and improving the economic potential of communities is always beneficial, but especially during the current great recession. One underused opportunity can be found by looking up. As an exercise, go to a tall building in the city, look at a Google Maps view of the city, rent a hot air balloon ride, or in some way look down at your city. In most cases, there will be a number of big flat roofs. These roofs are a missed opportunity for the economic growth of the community.
Some potential economic generators for rooftops include:
• PV Solar Panels
• Green Roof
• Edible Green Roof
The PV solar panels would, somewhat obviously, generate electricity that the building owner can use to reduce their electric bill. In the extreme case, there will be a surplus of electricity that the building owner can sell back to the local utility company and actually make money instead of just saving money.
The state of Colorado recently passed a law making it much easier for non-utility companies to resell electricity. For example, the building owner could set up electric vehicle charging stations on their property and make some income from reselling the electricity.
New Jersey and California are using public-private partnerships to get solar panels installed, for example, on school rooftops.
If enough rooftops install solar, and the region is getting close to exceeding current electric plant capability, these rooftop electric generators could prevent the region from having to invest in a new power plant, thus saving everyone an electric rate increase to pay for the new plant.
In addition to reducing urban heat islands, thereby making the city more people friendly, green roofs can significantly reduce the building's heating and cooling needs. One on-line estimator provides some estimates for the cost savings. Some example yearly cost savings:
• Arizona State Capitol Building = $24,266
• The Pentagon = $1.1 million
• Walmart SuperCenter in Bentonville, Arkansas = $179,300
Hopefully, this saved money will work its way back to the city and the residents. The concept can also be used for city buildings and the saved money can be used elsewhere within the city budget.
Extending the heating and cooling benefits of green roofs to edible green roofs would have the additional benefit of increasing local jobs as well as local food. Some interesting examples of large rooftop gardens are coming from New York.
Rooftop farms are in the midst of a boom here in New York City
Driving that boom – at least in part – is New York City’s Zone Green. Proposed amendments to Gotham’s zoning code that continue an inexorable march through the approval process, Zone Green would permit solar panels, green roofs, storm water systems, skylights and other green features on New York City buildings, despite existing restrictions within the 1961 code. Specifically with respect to rooftop farms, Zone Green would allow a waiver of floor area and height limits for greenhouses on top of non-residential buildings.
A second rooftop garden example comes from Brooklyn. This one is a hydroponics greenhouse on top of a warehouse rooftop that is harvesting 365 days a year.
Another agricultural example comes from Denver. The Brown Palace hotel in downtown Denver has added 4 bee hives and 65,000 bees to their roof and "can produce upwards of 150 pounds of harvestable honey every summer". "Marcel Pitton, managing director of The Brown Palace, said the honey is like 'liquid gold' for the hotel. The honey is used in the restaurant kitchen and as the basis for a lavender honey soap and a local beer, made with the Wynkoop Brewing Company." The article states that the beekeeping program started "three years ago, after the city passed an ordinance in 2008 allowing hobbyists to own hives." Thus, emphasizing the importance of planners working with the city to make the conditions appropriate for roofs to become economic generators.
Rooftop gardening can be practical with the only change to the roof being the addition of light weight planters. In 2011, "450 urban agriculture planters were installed on the roof of the Palais des congrès, allowing three partner restaurants (Crudessence, the Palais’ catering service, and Intercontinental hotel) to learn more about the basics of market gardening in cities and offer a wide variety of produce on their menu for those who want to eat locally and in season." (Source article)
As one example, Biotop has an edible roof integrated system for growing food. It is lightweight and can be installed on an existing roof without structural modifications. The Montreal Convention Center installed this system over the summer during the Ecocity World Summit 2011. The food grown went to local restaurants.
Here are 7 brainstorm ideas for how cities can help to turn the city's unused rooftops into economic generators:
1) Comprehensive Plan and Sustainability Plan
2) One Page Briefs
3) Free Advertising
4) Zoning and Ordinances
5) Open Space Reduction for Green Roofs with Additional Credit for Growing Food on Roof
6) Make it Easy
7) (Last Resort) Unused Rooftop Tax
First, make sure that the city's Comprehensive and Sustainability Plans include using the rooftops in creative ways, so as to encourage economic benefit to the building owner as well as the city and its residents.
Second, create one page briefs. These would be short educational brochures to inform the building owners of the potential cost savings or income potential from different roof uses. Logistic, cost, and other "barriers to entry" should also be noted to help the building owner decide if one of the options is not appropriate for their building.
Third, do some free advertising for building owners that implement a rooftop economic generator. For example, highlight the business on the city web site or in the local newspaper. Use the company as a case example highlighting what it took the company to implement the economic generator. This could be framed as this month's "Rooftop Economic Generator Winner". Over time, revisit the company and have another article, or award, highlighting the economic benefits that the company has generated from the rooftop effort.
Fourth, make sure that all zoning and ordinances allow these economic generators on the roof. For example, see New York City's Zone Green above. Also, for the edible rooftops, allowing food stands to be set up around the city would help the food growers sell the produce to residents. Wheat Ridge, CO claims to have one of the most liberal food stand policies in the U.S., allowing food stands almost anywhere: "This ordinance updated the city's regulations so that community gardens (under the category 'urban gardens'), farmer's markets, and produce stands are now allowed in any zone district."
Fifth, which is a subset of Zoning and Ordinances, is to allow green roofs to count towards open space requirements. This will give developers more ground space for development, increasing their profit. This should also increase city tax revenue (sales tax for increased business size, increased property tax from larger building/more housing units).
Sixth, make it easy. For the most part, the building owner is most likely not interested in trying to run a second business. A big box chain store will not want to change its focus. A small mom-and-pop business may already be stretched on available time and brain power. To overcome this, the city can do some of the legwork for the building owners. For example, consolidate all needed forms and steps into a pre-packaged form. Consider the case of an edible green roof: the city can have on hand a leasing contract that the building owner can use with the farmer that wants to lease the roof space. The building owner can then use these forms verbatim, or modify them as desired. As a case study, look at Santa Cruz. They have 7 pre-approved ADU packages that, if selected by the homeowner, allow for a shortened building approval process.
Seventh, create a new "unused rooftop" tax. This should only be used if all other means to encourage effective rooftop uses fail. The concept for this tax is that if a commercial building owner with a flat roof does not implement an approved economic generator use for a "sufficient" percentage of the roof, then the building owner has to pay a tax for this unused space. If the building owner does implement an approved use for enough of the roof, then there is no tax.
By looking up, cities can "create" additional space to plan for the future. With some creative thinking, the unused rooftop space in the city can become an economic generator.
Wednesday, June 13, 2012
Speaking at APA Colorado Conference
We have been accepted to speak at the APA CO conference in Snowmass, Colorado. The talk is scheduled for the afternoon of Fri., Oct. 5, 2012. Title: "Babies & Boomers - Assessment Tools to Create Livable, Sustainable Communities for All Ages"
This is a joint presentation with the DRCOG representatives who will discuss the Boomer Bond project including the city staff based self-assessment and toolkit items to better prepare the city for the pending significant increase in senior population.
HLP will expand the discussion to include all ages. We will present our city ranking system, which is a data-driven system using information from the internet. Our focus includes eco-friendliness, sustainability, and designing for all ages.
Monday, April 30, 2012
Speaking at Downtown Colorado Inc. Annual Conference
Jenny has been accepted to present at the Downtown Colorado Inc. annual conference in Golden, Colorado. The talk is currently scheduled for Thur., Sept. 13, 9:30 am. The talk title is "Sustainability Toolkit for Family-Friendly Cities". We are hoping to present a number of innovative ideas to help cities be more eco- and family-friendly by designing for the youngest to the oldest resident.
Thursday, January 12, 2012
Personal Transportation Hubs: Supporting Alternative Transportation
Geostatistical and multi-elemental analysis of soils to interpret land-use history in the Hebrides, Scotland
In the absence of documentary evidence about settlement form and agricultural practice in northwest Scotland before the mid-18th century, a geoarchaeological approach to reconstructing medieval land use and settlement form is presented here. This study applies multielemental analysis to soils previously collected from a settlement site in the Hebrides and highlights the importance of a detailed knowledge of the local soil environment and the cultural context. Geostatistical methods were used to analyze the spatial variability and distribution of a range of soil properties typically associated with geoarchaeological investigations. Semivariograms were produced to determine the spatial dependence of soil properties, and ordinary kriging was undertaken to produce prediction maps of the spatial distribution of these soil properties and enable interpolation over nonsampled locations in an attempt to more fully elucidate former land-use activity and settlement patterns. The importance of identifying the spatial covariance of elements and the need for several lines of physical and chemical evidence is highlighted. For many townships in the Hebrides, whose precise location and layout prior to extensive land reorganization in the late 18th–early 19th century is not recoverable through plans, multi-elemental analysis of soils can offer a valuable prospective and diagnostic tool. © 2007 Wiley Periodicals, Inc.
What is Logging?
This blog post touches on the basics of logging and looks to answer the question: what is logging? The timber harvested by logging is used for a number of things, from creating furniture for schools, hospitals, and offices, to being used for paper and cardboard boxes. Logging has been featured in thousands of news articles and news stories over the years, as traders carry out illegal logging, killing off animals and displacing tribes.
However, there are now charities set up to monitor illegal logging, trying to catch these people in the act and charge them hefty fines. This is greatly reducing illegal logging around the world, but there are still illegal loggers taking the risk and cutting.
Issues with illegal logging
• Animals: Many species that need trees for their homes and source of food will become extinct. Animals rely on trees for food and shelter; the more trees we cut, the more animals die off and become extinct.
• Flooding: Trees are very important for soaking up heavy rains. As it begins to rain, the water seeps into the ground and is absorbed by the trees' roots. The huge trees drink plenty of water, and when the monsoon rains come and the heavens open up, almost all of this water is taken in by the trees. However, if these trees are cut, the water will not be soaked into the ground, putting homes and land at risk of flooding.
• Climate Change: Trees are really helpful for the planet; they release oxygen that is taken in by humans and other mammals to help them live. Trees also take in carbon dioxide, a harmful gas that causes global warming. With fewer trees, there would be less oxygen for the living inhabitants of the earth to take in, and less carbon dioxide being absorbed, causing further climate change. It is possible for us all to do our bit to reduce our carbon footprint, such as by recycling plastic bottles, cans, and newspapers. Many metals are easy to recycle and can be turned into useful products such as car parts, school lockers, or machinery.
How to do your bit and save the trees?
We humans are the only species civilized enough to make or break planet Earth; because of us, trees are becoming extinct. Here are a few things you could do to help the planet:
• Recycle Paper: When using paper, always use both sides, and once you are done with it, place it in a recycling bin so that it can be re-used. If you have not used all of the paper, be sure to have a scrap-paper bin holding paper that can be re-used for notes.
• Buy Recycled Certified: I cannot stress enough how important it is to buy certified wood products, as then you know the materials used are from a sustainable forest. Make sure everything you purchase is certified, including goods such as paper, furniture, pens, and pencils. If you purchase certified wood products, you are helping to fund loggers that do ethical logging, like myself, planting 3 trees for every 1 we cut.
China moves closer to "hack proof" quantum communications network
China is set to complete the installation of the world's longest quantum communication network, stretching 2,000 km (1,240 miles) from Beijing to Shanghai, by 2016, say scientists leading the project. Quantum communications technology is considered to be "unhackable" and allows data to be transferred at the speed of light.
By 2030, the Chinese network would be extended worldwide, the South China Morning Post reported. It would make the country the first major power to publish a detailed schedule to put the technology into extensive, large-scale use.
The development of quantum communications technology has accelerated in the last five years. The technology works by two people sharing a message which is encrypted by a secret key made up of quantum particles, such as polarized photons. If a third person tries to intercept the photons by copying the secret key as it travels through the network, then the eavesdropper will be revealed by virtue of the laws of quantum mechanics – which dictate that the act of interfering with the network affects the behaviour of the key in an unpredictable manner.
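The detection principle described above can be illustrated with a toy intercept-resend simulation of BB84-style quantum key distribution. This is a simplified model for illustration only; the function and parameter names, and the two-basis coin-flip treatment of photon measurement, are our own assumptions, not details of the Chinese network. Without an eavesdropper, sender and receiver agree perfectly on the bits where their bases matched; an interceptor who measures and resends the photons introduces roughly a 25% error rate on those same bits, revealing the intrusion.

```python
import random

def bb84_error_rate(n_bits, eavesdrop, seed=1):
    """Return the error rate Alice and Bob observe on matching-basis bits.

    Without an eavesdropper the rate is 0; an intercept-resend attack
    pushes it toward 25%, which is how tampering is detected.
    """
    rng = random.Random(seed)
    errors = compared = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)
        basis_a = rng.choice("+x")              # Alice's preparation basis
        photon_bit, photon_basis = bit, basis_a
        if eavesdrop:                           # Eve measures and resends
            basis_e = rng.choice("+x")
            if basis_e != photon_basis:         # wrong basis randomizes outcome
                photon_bit = rng.randint(0, 1)
            photon_basis = basis_e
        basis_b = rng.choice("+x")              # Bob's measurement basis
        if basis_b == photon_basis:
            result = photon_bit
        else:                                   # basis mismatch: random result
            result = rng.randint(0, 1)
        if basis_b == basis_a:                  # kept after basis reconciliation
            compared += 1
            if result != bit:
                errors += 1
    return errors / compared

clean = bb84_error_rate(20000, eavesdrop=False)
tapped = bb84_error_rate(20000, eavesdrop=True)
print(f"no eavesdropper: {clean:.3f}, intercept-resend: {tapped:.3f}")
```

In a real deployment the comparison is done on a randomly sampled subset of the key; here the whole matching-basis set is checked for simplicity.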
If all goes to schedule, China would be the first country to put a quantum communications satellite in orbit, said Wang Jianyu, deputy director of the China Academy of Science's (CAS) Shanghai branch. At a recent conference on quantum science in Shanghai, Wang said scientists from CAS and other institutions have completed major research and development tasks for launching the satellite equipped with quantum communications gear, South China Morning Post said.
The potential success of the satellite was confirmed by China's leading quantum communications scientist, Pan Jianwei, a CAS academic who is also a professor of quantum physics at the University of Science and Technology of China (USTC) in Hefei, in the eastern province of Anhui. Pan said researchers reported significant progress on systems development after conducting experiments at a test center in the northwest of China.
The satellite would be used to transmit encoded data through a method called quantum key distribution (QKD), which relies on cryptographic keys transmitted via light pulse signals. QKD is said to be nearly impossible to hack, since any attempted eavesdropping would change the quantum states and thus could be quickly detected by dataflow monitors.
It's likely the technology initially will be used to transmit sensitive diplomatic, government policy and military information. Future applications could include secure transmissions of personal and financial data, Xinhua reported.
Governments in Europe, Japan and Canada are about to launch their own quantum communication satellite projects, and a private company in the US has been seeking funding from the federal government with a proposal for a 10,000 km network linking major cities. The Beijing to Shanghai project was launched last year. Although the Chinese government has not revealed the project's budget, scientists told state media that the construction cost would be ¥100m (£10.17m) for every 10,000 users, according to the South China Morning Post.
This story is about a proposal, by Planktos, Inc., to stimulate large-scale natural marine plankton blooms by "iron seeding", in situ, for the purpose of scouring carbon dioxide from the air. Why do this? One reason is that if our collective response to climate change is limited to lessening future CO2 emissions with better technologies today, these same technologies may be used more intensely in the future. Such a narrow focus also overlooks legacy emissions of CO2 that will remain in the atmosphere for decades, possibly capable of cascading us, in one highly uncertain scenario, into a catastrophe that no amount of new technology or lifestyle change could reverse.

The Planktos proposal would mitigate both the legacy CO2 emissions and future emissions from higher-performing technologies. The response time for plankton stimulation is relatively short, and it acts with more finality than other biotic mitigation techniques, such as tree planting. And it does something crucial that "carbon sequestration" cannot, which we'll explain a bit later.

You've already seen Planktos' celebrity mascot, Pico, who is only a few microns in diameter (opening photo). Their communications man Dave introduced Pico thusly: "Our wild and crazy Planktos mascot, Pico, is our coccolithophorid sidekick. He runs on chlorophyll & carotenoids, forms massive blooms, sequesters his CO2 inhalations in CaCO3 scales, and then sinks in great numbers to form deposits of chalk".
Like other celebrities, Pico is gregarious. All he needs is his summer dole of iron and he is surrounded by orders of magnitude more friends.
Because he is not expensive to feed, we thought that perhaps Pico could help powerful Australian and US politicians get out of denial and at the same time give regular folks something hopeful to support. Who knows, but with enough iron, like Popeye and his spinach, Pico might turn into SuperPlankton: Defender of Outspoken Climate Scientists?
For a more technical view of the Planktos proposal, we interviewed David Kubiak of Planktos, Inc.
How might large scale infusions of bio-available iron affect marine plankton communities?
First off, we are not talking about "large" infusions of bio-available iron. We are talking about very dilute infusions over very large areas. Iron only has to be replenished in parts per trillion.
Our projects will add something on the order of 50 tons of iron to seed an ocean surface area of approximately 10,000 square kilometers. This would be only a few percent of the scale of naturally occurring blooms.
Dust storms have delivered thousands of tons of iron dust to these same areas [blue indicates the zones with greatest potential for increased plankton productivity] many times in the past, provoking huge blooms, and with no observed ill effects. Moreover, once an area is no longer iron-deficient, adding further iron has no biological effect. "Large scale infusions" of iron, in the sense you imply, are therefore inefficient and pointless, since most of the iron simply sinks out of the biologically active surface system and into the abyss.
The really important point here is that we are only proposing "restoration" of phytoplankton to 1980 levels of health and activity as defined by NASA and NOAA scientists. Their studies show a 25% decline in Pacific plankton populations in the last 25 years and a 6~9% die off globally.
Considering that 1980 levels of marine photosynthesis metabolized about 50 gigatons of CO2 annually, the recent shortfall equals nearly 3 gigatons of lost photosynthetic capacity or approximately half of all industrial and automotive emissions each year.
Returning plankton populations to 1980 levels would neutralize about 50% of industrial society's greenhouse gas emissions, and we feel that is about all you can or should ask a single ecosystem to contribute to our self-inflicted climate wars. The rest of the problem must eventually be handled by our own species, changing our basic energy systems and insane consumption patterns.
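The arithmetic behind these figures can be checked directly. The numbers below are the interview's own claims, not independently verified data, and the variable names are our own:

```python
# CO2 metabolized annually by 1980-level marine photosynthesis, in gigatons,
# per the interview's figure:
baseline_uptake_gt = 50.0
# Low end of the quoted 6~9% global plankton die-off:
conservative_decline = 0.06

shortfall_gt = baseline_uptake_gt * conservative_decline
print(f"lost annual CO2 uptake: ~{shortfall_gt:.0f} gigatons")

# The interview equates this shortfall with roughly half of annual
# industrial and automotive emissions, which implies emissions of about:
implied_emissions_gt = shortfall_gt / 0.5
print(f"implied annual emissions: ~{implied_emissions_gt:.0f} gigatons")
```

So the "nearly 3 gigatons" figure is simply 6% of the 50-gigaton baseline, and the "about 50%" claim follows from pairing that shortfall against the interview's implied emissions total.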
Are you confident that the outcomes will be positive or at least that any negative impacts will be transitory? The inference I took from your website narrative, that airborne iron deposition at relatively high rates is an ambient condition of past centuries and is a condition which your business would be replicating, seems open to challenge.
In answer to your first query: "Yes".
Based on a century of ocean plankton science and the 10 international experiments on iron fertilization over the last 15 years we are confident that the scale, methods and technologies of the work we are planning will have positive impacts on all fronts, improving water quality, buffering surface water acidity, recharging the marine food chain, and safely sequestering enormous amounts of CO2 to help slow climate change.
Regarding the latter part of your question, it is pretty difficult to respond to an unspecified "challenge". However, addressing what you have specified:
1) The 25% falloff in ocean iron deposition is a NASA documented fact.
2) That Planktos will be mitigating that shortfall by replenishing marine iron micronutrients in concentrations similar to dust storm deposition is also a fact.
3) And the fact that plankton receiving this iron supplement would uptake and utilize it just as they do wind-borne sources is also well established thanks to at least ten international studies of the effect.
After nearly two centuries of observing plankton blooms, mariners and ocean scientists have yet to report any persisting "negative impacts" other than transient drops in other local nutrients (like nitrogen and phosphorus), which naturally limit bloom durations to 60~90 days.
Planktos is planning "bonsai replication" of these natural iron seeding events, and will be closely monitoring their brief cycles of development.
With this methodology, the factors most often mentioned as risks of negative or unintended consequences are a.) employing it too near coastal zones afflicted with toxic algal species, and b.) pushing beyond literal "restoration" activities to seed unprecedented concentrations of new plankton growth.
Regarding the first issue, we are fully aware of "red tide" type phenomena and will be planting our "ocean forest" patches well out in the deep oceans, partially to ensure complete coastal safety and partially because our true target areas, the vast iron-deficient HNLC "desolate zones," are in the blue water sea.
HNLC is oceanographer-speak for "High Nutrient, Low Chlorophyll" and indicates zones of very low productivity, i.e., very little plant life or photosynthesis, which is why their water is so brilliantly blue. Indeed the bluest seas are virtual marine deserts and thus our main areas of focus and activity.
The oft-cited potential coastal hazard is in fact pretty illusory, since most coastal waters are iron-replete, and adding further iron therefore produces no new plankton growth at all.
Regarding the second scale limit issue, we actually do not believe there would be significant side effects from pushing plankton populations a few percent above their normal 1980s' levels. That baseline was arbitrarily set by the NASA satellite technology of the period, which offered the first reliable marine productivity census in history.
Since we do know that CO2 concentrations have been steadily increasing for the last hundred years, it is almost certain that the true marine productivity/ocean plankton baseline was actually much higher a half century ago.
However, without reliable data we do not know that for a fact. We therefore feel it safer to subscribe to the precautionary principle and advocate plankton restoration to "known levels of health" and no farther, and draw a bright line there.
As mentioned above, observing this degree of caution and restoring only the most conservative estimate of lost plankton (6%) will still absorb and sequester nearly 3 billion tons of CO2 without entering any new areas of ecological uncertainty.
Iron complexes deposited from all natural Aeolian processes are certainly not going to have the same exact salt compositions and bio-availability characteristics that the Planktos infusions will have.
Not all health store iron supplements or even all organic calf livers have "the same exact salt compositions" either, but they all pretty effectively deliver the goods. And certainly all the "natural Aeolian processes, historically and presently" that you refer to also vary widely among themselves. All that really matters is that the delivery matrix disassociates easily from the iron content and is ecologically inert or benign.
Bioavailability: Once in marine waters, this is largely a function of the iron particles' scale with sub-micron sizes being best.
I am also hoping that you can offer a take-home insight for those who lack advanced degrees in the sciences.
The most important insight to communicate is this: the most urgent planetary crisis we currently face is not global warming; it is the widening CO2-induced disaster in the seas. In tandem with the escalating plankton die-off, marine surface waters are now suffering increasingly toxic levels of carbonic acidity. Together these effects are threatening not just vital ecosystems, fisheries and the entire marine food chain, but also the planet's primary oxygen supply.
So as Prof. Harold Hill would say (to those "who lack advanced degrees"), "we got trouble, my friends, trouble right here in every river and city. Trouble with a capital "T" and that rhymes with "C" and that stands for CO2."
Restoring our plankton allies in the seas could help us defuse up to half of that trouble. The rest is up to us.
For other references, Planktos invites TreeHugger readers to see the links, papers and slide show on their website.
=== end interview ===
If you react as we first did to this information, you may be thinking that Planktos' dream will be a feel-good rationalization for utility companies, lobbyists, and traders who want to maintain "business as usual." This sort of behavior would, of course, be no different from owners of hybrid vehicles feeling entitled to drive more miles than those with SUVs, or owners of compact fluorescent bulbs feeling they can leave their lights on when out of the room.
Some scenario thinking ended our cynical first appraisal. Link in the overlaying issues of Peak Oil, Peak Natural Gas, collapsing fisheries, severe hurricanes, the possibility of tying carbon trades to the satellite-verified effects, a rapidly approaching tipping point in media awareness of the climate issue, and rapid growth in solar and wind power product sales. Green design has its own momentum.
Finally, we'd like to point out that we are aware of the historic coverage of the ocean "eco-hacking" idea. Universities are making their contributions as well.
If environmental groups can sponsor rainforest protection for carbon sequestration, there's no reason why utility companies can't contemplate the same for plankton. Why not work together? Given the risk of climate change, we think Pico deserves his extra iron.
Saving the small things that run the planet
Buglife Scotland
Freshwater pearl mussel (Margaritifera margaritifera)
Scotland's invertebrate fauna is special and distinctive. Some species are found nowhere else, and many more rely on the country as their stronghold. Conserving important habitats, sites and endangered species is vital work.
Since the publication of 'A Strategy for Scottish Invertebrate Conservation' in 2009, Buglife has been busy working to implement it.
The strategy was launched in Edinburgh on 20 January 2009 by Michael Russell MSP, Scottish Government Minister for Environment, and it represents the first national implementation of the European invertebrate strategy. The vision in the strategy is for a Scotland where invertebrates are valued and conserved for their key role in a healthy environment and for their potential to bring people together to better understand and appreciate the natural world.
Why invertebrates are important for the Scottish economy
Scottish Fishing Industry
The Scottish fishing industry relies on invertebrates like shrimps, prawns, crabs, lobsters and other shellfish such as mussels and oysters which make an important contribution to the economy of coastal communities.
Catches of Langoustine (Norway Lobster, or Scampi) contribute £89.3 million to the Scottish economy each year – more than the combined catches of Cod, Haddock and Monkfish. In the pelagic fisheries, our important stocks of cod, herring and haddock depend on invertebrates such as krill and copepods for their food.
Freshwater fisheries for game fish contribute over £112 million annually to the Scottish economy. Aquatic invertebrates like stoneflies and mayflies are an essential source of food for such fish.
Ecological services
Invertebrates provide a number of important 'ecological services'. These services are often overlooked until they are damaged or lost. They are usually impossible to replace. One example is crop pollination. Insects are responsible for the pollination of a variety of crops in Scotland. The most significant is the soft fruit industry with the raspberry crop in Scotland worth £52 million annually. The blackcurrant crop is valued at £8 million; however, the associated processing industry is worth an additional £200 million.
Sewage treatment
Invertebrates play an important role in sewage treatment. One of the simplest but most effective treatments for sewage involves passing the effluent over a bed of stones on which a biofilm of bacteria, fungi and algae grow and process the waste. The biofilm attracts, and is ingested by invertebrates including non-biting midges, moth-flies and worms. Altogether these organisms turn the sewage into clean water and an organic sludge that can be used as fertiliser or fuel.
Healthy soil
Earthworms and other soil invertebrates like springtails benefit agriculture by maintaining and improving the structure and aeration of soil by their constant feeding and burrowing. They break down organic matter such as dead leaves and return essential minerals and organic matter to the soil, enabling renewed crop growth.
Tourism – especially 'eco-tourism' – constitutes an important and increasing element of economic activity in Scotland. Much of this is about history and landscape, but it is also about wildlife.
Indirectly, invertebrates are important in underpinning the survival of talismanic animals such as Ospreys and Otters.
But there is also an increasing interest in Scotland’s special invertebrate fauna. Certain iconic species such as the Kentish glory moth (Endromis versicolora), Chequered skipper (Carterocephalus palaemon) and Mountain ringlet (Erebia epiphron) butterflies as well as the many striking dragonflies, beetles and flies of Scotland’s boreal woodlands increasingly attract visitors, not just from within Scotland but also from other parts of the UK and Europe.
You can learn more about the latest discoveries and events in Scottish Invertebrate News, the bi-annual newsletter for people interested in invertebrates in Scotland – whether you are a novice or expert, this is your newsletter.
If you would like to receive a printed copy of 'A Strategy for Scottish Invertebrate Conservation' or you would like more information about the strategy and its implementation then please contact the Scottish Office
Buglife’s work is grant-aided by Scottish Natural Heritage
Neck Neuropathy and Neuropathic Pain
Neuropathic pain, in contrast to nociceptive pain, occurs when nerve fibers become damaged, traumatized and/or dysfunctional. As a result, the signals sent to you by the damaged fibers may seem like they don't make sense.
To best understand neuropathic pain, let's talk briefly about what nerves do. Their job is to relay messages to and from the brain and spinal cord about what's going on (and what to do about it) in tissues, organs, muscles and more.
If you twist your ankle or burn your hand on a hot stove, for example, your nerves go right to work communicating to the brain and spinal cord about what happened; they also receive and carry response signals from the brain, delivering these back to the body tissues involved in the inciting incident (i.e., your ankle or your hand). These response signals are actually impulses to move. In this case, they may prompt you to take your hand away from the stove burner, or to run it under cold water to decrease the pain.
Nociceptive pain is the term used to describe the result of this communication provided by your working nerves.
But when the nerves themselves are injured, you may get neuropathic pain. Injured nerves can become active for no reason -- that is, they may "fire" but not in response to changes that are going on in the tissues, organs or muscles they serve.
Neuropathic pain is generally related to peripheral nerves, i.e. nerves that have branched off from the spinal cord.
The brain and the spinal cord comprise the central nervous system, and the peripheral nerves constitute the peripheral nervous system.
What Does Neuropathic Pain Feel Like?
According to Dr. Zinovy Meyler, an osteopath from Princeton, New Jersey, neuropathic pain can be very severe. It can be sharp, feel like an electrical shock, or produce pins and needles.
It can also manifest as a deep burning sensation or coldness in your limbs, he says. Other symptoms that go along with neuropathic pain include numbness, weakness or altered sensation anywhere along the path the nerve travels.
Neuropathic pain can intensify sensation, but it may also decrease sensation, Meyler says. It has a number of causes, including diabetes and compression of a spinal nerve root (called radiculopathy). And, he adds, neuropathic pain can be caused by treatments that damage nerves, such as chemotherapy or radiation.
Neuropathic Pain in the Neck
A common type of neck neuropathy is called cervical radiculopathy. Radiculopathy refers to compression of a spinal nerve root where it exits the spinal cord to begin to branch out all over the body. Cervical radiculopathy is often caused by a herniated disc in the neck. It's possible to get lumbar neuropathic pain - i.e., lumbar radiculopathy, as well.
Learn more about cervical radiculopathy:
Gould, H., MD, PhD. Understanding Pain: What It Is, Why It Happens, and How It's Managed. AAN Press. 2007. St. Paul, MN.
Guyton, A., MD, Hall, J., MD. Textbook of Medical Physiology. 11th ed. Elsevier Saunders. 2006. Philadelphia.
Hanline, B., MD. Back Pain Understood: A Cutting-Edge Approach to Healing Your Back. Medicus Press. 2007.
Meyler, Z., DO. Causes of Neuropathic Pain Video. Spine-Health website. Accessed: Feb 2016.
Scientists struggle to define life
Philosophers wrestling with the big questions of life are no longer alone. Now scientists are struggling to define life as they manipulate it, look for it on other planets, and even create it in test tubes.
In June, researchers replaced the genetic identity of one bacterium with that of a second microbe. Other scientists are trying to build life from scratch. NASA scientists are searching for life in space but aren't sure what it will look like. And some futurists are pondering the prospect of robots becoming so human they might be considered a form of life.
So as scientists push the bounds of biology, astronomy and robotics, a big question looms: What exactly is life?
That question is bubbling up from recent advances in lab work.
"We're all sort of thinking that the next origin of life will be in somebody's lab," said David Deamer, a University of California, Santa Cruz, biochemistry professor who is one of the leading experts trying to create life. But ask Deamer what life is, and he responds by saying it's best to describe it, not define it.
Broadly put, scientists like Deamer say life requires a cell with genetic material and the ability to reproduce, turn food into energy, and to evolve through natural selection. But it's not that simple for others seeking a definition.
At NASA's Astrobiology Institute in California, which studies extreme life here and the possibility of it elsewhere, it's far easier to say what life isn't, said institute director Carl Pilcher.
"Right now we may not have the base of knowledge necessary to answer the question, but there are ways we are proceeding," he said.
Last month, the National Academy of Sciences issued a "weird life" report cautioning NASA not to be so focused on water. It told the space agency that "as the search for life in the solar system expands, it is important to know what exactly to search for."
That same report urged NASA to avoid being "fixated on carbon" when it looks for life even though carbon is often called the backbone of life on Earth.
But if carbon isn't a requirement for life, how about silicon? In other words, what about machines?
Ray Kurzweil, a renowned futurist who advises people such as Bill Gates, believes that by 2029 a machine will pass a prime test of artificial intelligence, offering the same kind of answers as a human.
"The key issue as to whether or not a non-biological entity deserves rights really comes down to whether or not it's conscious," Kurzweil said. "Does it have feelings?"
This isn't just a Kurzweil concept.
"A monumental shift could occur if robots continue to be developed to the point where they can at some point reproduce, improve themselves or if they gain artificial intelligence," said a 2006 study commissioned by the British government's science office. That report compared the situation of robots to the emancipation of slaves.
Look for changes in religion, too.
One of the men trying to make life from scratch, Mark Bedau, understands the worries. A philosophy professor from Reed College in Oregon, Bedau is also the chief operating officer of the synthetic biology firm ProtoLife in Venice, Italy.
His team and others are trying to make single-cell organisms from chemical components, creating a genetic system that multiplies and a metabolism that takes in energy from the environment. Scientists say they are close to completing a key first step, creation of a vesicle, or container, for the cell.
"We are doing things which were thought to be the province, in some quarters, of God — like making new forms of life," Bedau said in a phone interview from Venice. "Life is very powerful, and if we can get it to do what we want ... there are all kinds of good things that can be done.
"Playing God is a good thing to do as long as you're doing it responsibly," he said.
What if Mrs. Roosevelt had run for president?
As Hillary Clinton faces a key test in today's New Hampshire primary, Robin Gerber's new novel, Eleanor vs. Ike, imagines what could have happened if Clinton's political idol — Eleanor Roosevelt — had run for president in 1952.
The idea is not far-fetched, Gerber says. "Eleanor was encouraged to run, starting as early as 1940, but ultimately, she wasn't as courageous as Hillary."
Gerber, 55, a Clinton supporter, is a lifelong Democrat and former union lobbyist. She got the idea for the novel three years ago talking to a group about her 2002 book, Leadership the Eleanor Roosevelt Way.
"Someone asked me if Eleanor had ever made a mistake. It hit me like a lightning bolt. 'Yes, she should have run for president,' I said. I couldn't get the idea out of my head and started researching why she hadn't run."
Eleanor's reasons were both personal and political. In 1948, three years after FDR's death, she publicly doubted that the country was ready to elect a woman president.
And "Eleanor was deeply insecure," Gerber says. "She found all the personal attacks on her very painful. She felt as if she'd been through enough."
As a feminist, Gerber found that frustrating and decided "since she hadn't run, I would just have to do it for her through the novel."
In Eleanor vs. Ike (Avon original paperback, $13.95), out today, Eleanor is drafted as the Democratic candidate after the real-life nominee, Adlai Stevenson, dies of a heart attack at the 1952 convention.
In the novel, Eleanor faces questions of whether a woman could be commander in chief during the Korean War, which the author finds "parallel to the questions Hillary has to overcome. Hillary shares Eleanor's doggedness in fighting for issues," but Bill has more of Eleanor's "empathic charisma."
The novel includes facts, such as the KKK placing a $25,000 bounty on her head. Gerber tried to make it "as historically accurate as possible," even creating a campaign poll based on 1952 public opinion. She used assumptions and evidence about the personal lives of Eleanor and Dwight Eisenhower: both had affairs; hers included a woman and, later, a younger man who was her doctor.
"It's likely she was bisexual," Gerber says. "She was an incredibly open person, capable of making great changes in her thinking."
The novel also imagines an attempted assassination, which Gerber says underscores "politics can change on a dime."
Gerber says Clinton can win, despite placing third in Iowa, if she appeals to hearts, not just minds. Gerber predicts a fall race between Clinton and Republican John McCain: a woman vs. a war hero, as in her novel.
No Reason to Pandemic
A poultry vendor naps on cages filled with live chickens at the Klong Toey market in Bangkok, Monday 11 July 2005. Several new cases of bird flu (avian influenza) were detected in at least one province in central Thailand, just two days before the country planned to declare itself 'bird flu-free' after no fresh occurrences for three months. Thailand had hoped to boost the country's poultry exports in the second half of the year, which now seems unlikely. Udo Weitz/EPA
Dozens of tractors knock down trees in the forests in the outskirts of Chiang Mai, Thailand, clearing land for a new housing development. The construction destroys the natural habitat of short-nosed fruit bats. One of those displaced bats—infected with some form of animal virus—drops a chewed-up apple core in a nearby backyard farm. A pig eats the leftover apple and then mingles with the dozens of other animals on the farm. Eventually, the family slaughters one for dinner, not knowing that the animal carries the bat virus.
Scientists don't know the exact origins of pandemics—such as the 2009 H1N1 swine flu virus, which ultimately killed an estimated 284,500 people worldwide—but many say the scenario above is a good approximation. They also say that the pandemics we've seen so far are a bout of sniffles compared to what might be on the horizon. Jennifer Olsen, the manager of the pandemics arm of the Skoll Global Threats Fund, a nonprofit whose employees served as scientific advisors for the movie Contagion, says a disaster like the one depicted in that film is feasible. "It's not a question of if there will be a global pandemic," Olsen tells Newsweek. "It's a question of when."
Assuming the money comes through, epidemiologists, hospitals, and animal and human health authorities would have "a single, robust way to see all of this data in real time," Olsen says — a website or a mobile app, for example. They could spot anomalies and coordinate instant response efforts, stopping outbreaks before they spread.
Workers at South Korea's Incheon International Airport carry out quarantine training to deal with bird flu patients on 2 November 2007. Quarantine inspectors monitor passengers from Thailand using a heat-sensing camera. YNA/EPA
When the Internet spread across the globe, the chances of catching a pandemic before it starts began to improve. What experts call "digital disease detection" began in the mid-1990s, when independent websites like the Program for Monitoring Emerging Diseases (ProMED) began to cull and collate information from official public health sources. Later, other sites developed more complex tools that scraped the entire Internet for infectious disease information — Google, for example, began an experiment in 2008 with "Google Flu Trends," which collects data from those impacted by disease via their search queries. These days, flu-tracking sites and apps like "Flu Near You" attempt to improve on this model by reaching out directly to people, enabling them to self-report flu-related symptoms.
That debacle prompted the WHO to change its International Health Regulations, and add new provisions requiring broader and timelier reporting of outbreaks of international concern. These changes took effect in 2007. Speed wasn't the only concern; the WHO also called for public health officials to more closely monitor ecological factors and animal health. That's because about 75 percent of recently emerging infectious human diseases are zoonotic, or of animal origin, as are 60 percent of all human pathogens, according to the Centers for Disease Control and Prevention (CDC). When it comes to understanding the spread of infectious diseases in humans, UCLA associate professor James Lloyd-Smith says, "It's clear, animals are the place to look."
The initiative is not without its challenges. For example: how to verify the accuracy of the information generated. "How do you separate the wheat from the chaff?" asks Lloyd-Smith. "When you are dealing with that kind of self-reported, high-volume data, how do you filter real-time information, identify what's really an incident you need to respond to versus what's a normal event?"
Deterrents to accurate reporting and participation include privacy concerns, fear of repercussions, and a lack of urgency or understanding around infectious diseases. "People may not initially be so eager to share this sensitive data," says Sajda. "There are a lot of questions: Will it increase my insurance? Will the government come and take my other cows?"
The initiative is considering incentives for participation, such as honorary badges or a "fast pass" to the overburdened local hospital. It has already received endorsement from the monks and temple, which are powerful shapers of public opinion. "Our goal is to create trust," Sajda tells Newsweek. "And a very deep public awareness: If you want good public health, which concerns all of us, you need to share data."
As for privacy, the initiative would limit the sensitive, personally identifying data it collects, such as names and addresses. "We're not trying to show an individual as patient zero in case of an epidemic or pandemic," she says.
The Chiang Mai group still hasn't answered questions about who will have access to the data, how protected it will be, how long it will be retained, and how it will be used. It also has not fully developed its technology solutions for reporting and analyzing data. And yet, there is excitement about the approach. "It's possibly less accurate but far faster, cheaper and a very, very powerful accelerator of the public health systems already in place," says Robert Kirkpatrick, director of the U.N.'s Global Pulse (an innovation lab using big data to aid development), who wasn't involved with the Chiang Mai initiative.
The team in Chiang Mai believes that the approach of the future will take advantage of where people already are: on their smartphones. "When you have all of these people wasting so much time looking at a little screen," says Sajda. "How can you capture a fraction of that to support public health?"
The answer could end global pandemics.
Hydration To The Rescue
Athletes, people trying to lose weight, and even the general public are instructed to ensure high levels of hydration throughout the day. Ingesting non-diuretic, non-damaging (i.e., non-alcoholic) fluids helps the body maintain normal organ function, encourages homeostasis within all systems of the body, and keeps the physiological stress on the tissues at a lower level. Fluids provide a sensation of fullness (one of the reasons they benefit weight loss) and also encourage mental alertness. Symptoms that make it hard to get through the day (headaches, hunger pangs, fatigue) are often alleviated by increasing fluid intake. Thirst is said to be a poor indicator of hydration because by the time the thirst mechanism is activated, it is too late. Urine color has been shown to be a better indicator: a lighter color (like lemonade) is ideal versus a darker color (like apple juice). Note that some vitamins and foods will affect urine color regardless of exertion and fluid intake.
Hydration’s Effect On Performance
Researchers from the U.S. Army Research Institute of Environmental Medicine have revealed interesting findings relating to hydration and performance in the heat as well as the cold. Common knowledge and personal experience have shown many people that activities performed in the heat are harder to accomplish than when the temperature is cooler. This experiment confirms that observation but also notes that when all other factors were held constant, a drop in temperature alone did not have a negative effect on performance. The bulk of the study examined the body's response to heat, cold, and fluid retention, and performance in each of those environments.
Not surprisingly, the researchers found that as it got hotter and dehydration set in, performance worsened. The subjects felt as though they were performing at the same level, but they were actually not able to produce nearly as much power as they had previously. Fluid intake and absorption help prevent water-weight loss and the loss of essential substances from the body. Losing as little as 3% of body weight (which marathoners often and easily do) has a dramatic effect on performance when the activity is done in the heat. This research is important to athletes who compete in the heat, as well as people who are physically active outdoors. Performance in this study was sport related, but it shows that the body is affected, so it should serve as fair warning to construction workers, gardeners, soldiers, lifeguards, and anyone else who must be physically active in the heat. The hotter it got, the more marked the negative effects were.
While colder temperatures had less of an effect on performance based on the "fluid status" of the person, there seemed to be a way to keep fluids available for use even when the individual could not consume them. Participants consumed a glycerol solution and were found to be hyper-hydrated after they were exposed to cold air. Glycerol is a sweet, syrup-like substance that allows the body to "better preserve the extravascular fluid volume, accounting for the improved TBW (total body water), compared with water alone. This extravascular 'reserve' could later be called on during exercise or heat stress, when hydration becomes important to performance and thermoregulation," the paper noted. The next step will be to examine how this finding translates to performance at these cold temperatures.
We see that the body needs proper fluid balance in order to perform at its peak. Dehydration has negative effects on performance, especially as it gets warmer. Dehydration has only a marginal effect on performance in the cold, but it is possible to increase the level of available fluid by ingesting a glycerol solution. As with anything ingested, fluid consumption should not be taken to either extreme. Find balance, and find yourself performing better.
American Experience
Public Enemy #1
People & Events: The Era of Gangster Films, 1930-1935
During the Great Depression, casting gangsters as heroes created a new film genre that symbolized the decay of American society, as well as the fear that traditional values would not survive the economic crisis. These new crime films were different from the morality tales of the silent era's crime genre. Their ethnic characters, pulling themselves up by the bootstraps, were the new archetypal Americans.
The first film in this new genre, Little Caesar, depicted the rise of a small-town mobster to the upper echelons of organized crime. Appearing in 1930, it starred Edward G. Robinson as Caesar Enrico Bandello. Unlike earlier gangsters, Bandello lives and dies unrepentant of his crimes. The movie was so successful that Hollywood made more than 50 gangster movies the following year.
The most violent movie of 1932 was Scarface, starring Paul Muni as Tony Camonte, a Chicago mobster. The street battles were fictionalized episodes that clearly depicted the life of the notorious Al Capone. When he viewed a few scenes during filming, Capone was impressed with their authenticity.
Because the film depicted 43 murders, the Motion Picture Production Code refused to give its seal of approval to Scarface. The film was released anyway, with a few changes and the title Scarface: Shame of a Nation. Local censors cut many scenes. The film was also denounced as defamatory by the Order of the Sons of Italy in America.
In the character Tommy Powers, the 1931 gangster flick The Public Enemy presented an all-American anti-hero in the tradition of Tom Sawyer. In this story of Prohibition, James Cagney portrays an Irish American mobster who hates authority and finds respectability stifling. He treats his girlfriends badly, but remains loyal to his mother -- and to his male associates.
Following the conventions of the era, Powers dies at the end. However, the fact that he is killed by rival gangsters instead of the police symbolized the deterioration of law enforcement and the government. Warner Brothers understood how controversial this ending was, and included the following message at the end: "The end of Tom Powers is the end of every hoodlum. The Public Enemy is not a man, it is not a character, it is a problem we must all face."
In 1933 the National Committee for the Study of Social Values published a study on crime. One of the findings claimed that gangster movies had given convicted criminals their early education. Roman Catholic bishops, a Catholic lay organization called the Legion of Decency, and the International Association of Chiefs of Police all pressured Hollywood to end movie violence. To prevent government censorship, the Motion Picture Producers and Distributors of America agreed to enforce its own Production Code. The Code had existed since 1930, but the studios usually ignored it.
The Code's preamble stated, "crime will be shown to be wrong and that the criminal life will be loathed and that the law will at all times prevail." Villains could not be protagonists, and at the end, they had to be dead or in jail. Because gangster films were Hollywood's most profitable movies, the studios were faced with a dilemma.
In 1934 public enemies John Dillinger, Pretty Boy Floyd, and Baby Face Nelson were killed by FBI special agents (slangily called "G-men"), who became an overnight sensation. They were young, college educated, and dynamic. The special agents personified the success of Roosevelt's New Deal, and the revitalization of the country. Showing their success on film seemed a natural substitute for the gangster movie.
G-Men, the first film about the FBI, came out in 1935. It starred James Cagney as Special Agent Brick Davis. He essentially has the same characteristics as Tommy Powers, but he is on the "right side" of the law. The three villains are thinly disguised versions of Dillinger, Floyd, and Nelson. To give the film a documentary-like quality, G-Men shows pictures of the Justice Department building, microscopic shots of bullets and fingerprints, and the FBI firing ranges.
Even though Warner Brothers was presenting G-Men as an official history of the FBI, the agency in no way contributed to it. After its release, the FBI received lots of fan mail regarding the picture. Director J. Edgar Hoover issued a form letter response stating, "This Bureau did not cooperate in the production of G-Men, or in any way endorse this motion picture."
Hollywood followed the success of G-Men with six more FBI pictures. In September 1935, the studios were forced to stop when the British Board of Censors complained that the new FBI films were just as violent as the gangster films. Not wishing to lose its British market, Hollywood entirely deleted gangster characters from its movies for the next several years.
© New content 1999-2001 PBS Online / WGBH
FDA Ready to Approve Frankenfish Despite Fishy Science
What Is AquAdvantage Salmon?
How Are GE Animals Regulated?
The next question one might ask is how the federal government goes about deciding whether or not a GE animal should be allowed in our food supply. Under a 1980s decision, written by anti-regulation ideologues in the Reagan and Bush I era before any GE foods -- plants or animals -- were ready for commercialization, the government decided that no new laws were needed to regulate GE plants or animals. This decision is called "the Coordinated Framework for the Regulation of Biotechnology," or the Coordinated Framework for short.
Whereas the European Union debated the regulation of GE foods and drafted new laws to address issues such as safety, traceability, allergenicity, and environmental impacts, the U.S. never passed any new laws specific to GE animals or had the kind of public debate passing a new law would require. Instead, it decided to regulate GE animals as "animal drugs," using laws that are not well suited to the unique, complex issues posed by GE animals. Specifically, the government considers the extra DNA added to the GE animals to be the "animal drug." New animal drugs are regulated by the Food and Drug Administration, which receives input from the Veterinary Medicine Advisory Committee (VMAC) -- a committee mostly made up of veterinarians, not genetic engineering experts.
2010: Sloppy Science on Trial
Back in 2010, the FDA took the first steps to approve AquaBounty's application to produce the GE salmon. It released a draft Environmental Assessment (EA) and several hundred pages of safety testing data from experiments performed by AquaBounty on the GE salmon. Then it gave the public a mere two weeks to comment on the data, and it convened VMAC to advise it on the GE salmon.
For watchdog groups, this was the first signal that something was, well, fishy. Consumers Union senior scientist Michael Hansen excoriated the safety data as "sloppy," "misleading," and "woefully inadequate." In addition to using small sample sizes and culling deformed fish and thus skewing the data, AquaBounty only provided data gathered in its Prince Edward Island facility, where it will produce GE salmon eggs. But, by law, it must also provide data from its Panama facility, where it will grow the salmon to full size.
The VMAC committee didn't give a resounding approval either. The New York Times summarized their findings, saying, "While a genetically engineered salmon is almost certainly safe to eat, the government should pursue a more rigorous analysis of the fish's possible health effects and environmental impact." However, the committee only advises the FDA, and its decisions are not binding.
2012: FDA Readies Its Rubber Stamp
Following the VMAC meeting and a second, public meeting on labeling issues surrounding the GE salmon, the FDA went silent. Over the next two years, it quietly examined the public comments and the input from the VMAC committee. But despite VMAC's suggestion for a more rigorous analysis, the FDA moved the application one step closer to approval without really addressing the gaping holes in the AquaBounty's science. In May of 2012, it produced an ever-so-slightly improved Environmental Assessment (EA) compared to the original draft it made public in 2010, and a preliminary "Finding of No Significant Impact" (FONSI).
What happened between 2010 and 2012 to improve our confidence that the GE salmon is safe for human consumption and for the environment? Nothing. There's no new data whatsoever. The only changes are a few minor additions to the EA -- certainly not enough to inspire confidence that the government heard the critiques made in 2010 and addressed them.
What's Fishy About the GE Salmon?
"There are still unanswered safety and nutritional questions and the quality of the data that was submitted to the FDA was the worst stuff I've ever seen submitted for a GMO. There's stuff there that couldn't make it through a high school science class," reflected Hansen in early 2013, after reviewing the newly released documents.
Almost laughing, Hansen remarks on the obvious flaws in AquaBounty's scientific justification of the fish's safety, saying, "That's not science, that's a joke!" Becoming more serious, he adds, "There's allergenicity questions and other health questions. It shouldn't be approved. They don't have any data to show that it's safe."
"There are environmental issues as well," Hansen continues. "Not so much in Panama, but on Prince Edward Island. That's where they're going to produce eggs. Guess what, to produce eggs, you've gotta have fertile adults." Before the wild Atlantic salmon population was decimated, the waters around Prince Edward Island were salmon habitat. An escaped GE fish could easily thrive there. "Yet," adds Hansen, "they conclude that even if they get out, the water's too cold!"
She submitted comments back in 2010, but says, "it looks like either they didn't read our comments or they just decided to ignore them." She points out that the Panama facility where AquaBounty will grow out the fish is not large enough for a commercial venture. "It's at a scale to show proof of concept of the commercial viability of this," she said. "Once the company scales up to selling millions and millions of eggs, the fish will be farmed by producers with all kinds of facilities." Those facilities might not be as well protected as the one in Panama. Unless the FDA brings its risk-assessment methods up to date, we have no adequate, scientific assurance that GE salmon won't escape into the wild.
Next Steps
Despite the scientific questions, there's no sign that the FDA will overturn its preliminary decision to allow commercialization of the GE salmon. The best Michael Hansen is hoping for at this point is a requirement that the GE salmon -- once legalized -- will be labeled as genetically engineered. He calls this "an outside chance."
Unfortunately, as Hansen notes, "This is to set a precedent. If they let the GE salmon go through, why would any other company that wants to get a genetically engineered animal through bother" producing rigorous, scientifically valid data to prove its product's safety?
If you don't want to see GE salmon in your local supermarket in as little as a few years, you can take action. The FDA is accepting public comments until April 26, 2013. You can write your own message and submit it at Regulations.gov or you can join Food and Water Watch's campaign against the GE salmon here. You can also write your representatives to let them know your point of view.
I have taken advantage of every opportunity that has come my way to object to the genetic engineering of wild or farm-raised salmon. The idea is appalling, and the possibility that, if it IS done, it may not require labeling is criminal. People have a right to know what is being offered for their consumption and whether or not it contains something to which they are allergic. I am flat out opposed to any GE food that has not been proven to be absolutely safe for human consumption and labeled as to what it contains. There have been so many non-native species "escape" into the wild and do irreparable damage to our eco-system that the very idea of this type of "frankenfish" is just too horrible. It will only be good for the greed and bottom line of the industry, not for the world population or the environment. PLEASE, don't allow this to happen.
Can someone please tell me why the FDA does nothing about our poisoned food supply and packaging?? Are they on a permanent vacation ?
We only have one solution left to us........if they will not tell us which salmon is GE I will stop eating all salmon.
Me too -- I nearly have already; done with it now. I will not even ingest "fish oil".
Sloppy science disturbs me deeply, and this process reeks of sloppy scientific oversight by the FDA. I would encourage the FDA to reconsider its decision and apply a complete peer-reviewed process using top geneticists and other allied science professionals.
We don't fully understand how our bodies will react to this experimental train wreck they prepare to perpetrate on us. While a thing might look right at certain Micro-levels of inspection, we must not forget that evolution took millions of years fine tuning the enzymes and other digestive properties of our systems to handle natural whole foods a given way. It works a specific way on specific things, a near copy isn't always good enough (Ask any chemist if "close" is good enough). Look at how many people can't handle the new strains of wheat and gluten(more people have grown intolerant of it in the recent years) it's almost identical .. but not quite. Bottom line while I stipulate to the problems of feeding an ever growing population as critical. We must be exceedingly careful as to how we do it. Lest we be the engines of our own extinction.
If these invade the oceans it will be the end of wild salmon and who knows what else. They may only raise them on land, but some farm-raised fish are now being raised in great nets in the open ocean. Remember the invasive lake trout that were intentionally released into the wild and what that has done to the bears? These people are stupid, selfish, greedy and dangerous. It's sad, because these scientists creating these things have all this education. They could be saving the world instead of destroying it. They must think they are really smart, but they don't have the common sense to look ten steps out and see the consequences of their actions. They have tiny little eyes that they can only focus on their little microscopes, with which they play "God". They are pathetic. Do they know or care that so few people respect them, or don't their brains think that far afield?
Math: What counts in the early years
Parents teaching their kids basic math before school is nothing new. What is different is that we now know how incredibly important it is. Photo by Digital Vision
“First, we learned to recognize numbers: speed signs on the road or pointing out people with numbers on their sports jerseys,” she recalls of his preschool days. Next came counting sets of objects. “I tried to count things he was into, like cars and airplanes.”
“A child's level of math skills on their first day in kindergarten predicts their mathematical ability in grade five,” says Solomon, who, in addition to being Xavier’s mother, is a developmental psychologist and research scientist at The Hospital for Sick Children (SickKids) in Toronto. “The literature shows a kid’s incoming number knowledge is a forecast for how well they will be doing in math as much as seven years down the road.”
The good news is Solomon’s ‘number talk’ with Xavier during the preschool years works. Research shows parents can easily increase their child’s math knowledge before school begins and set them up for success in the future.
What is 'number talk'?
“Number talk is any talk about numbers,” says Susan Levine, professor of psychology at the University of Chicago, who studies the relation between how much number talk goes on in the house and how well kids do in math on the first day of school. “We told the parents to do what they ordinarily do with their kids, recorded it, then went back to count how much number talk was going on. Parents who do this more have kids who score better in math.”
But Levine and her colleagues also found that some types of number talk proved to be better indicators for future understanding than others. “Kids can rattle off their numbers early, often from 1 to 10, and parents are surprised and impressed. But it’s a list with no meaning. When you say ‘give me 3 fish’ they give you a handful.” Gradually, kids figure out the meaning when objects counted are labeled by the parents. In essence, pointing to the objects while counting them and noting how many there are when the counting is done. “If you count 4 trucks, they are all 4 but it’s the fourth truck that carries the meaning.”
How kids learn to count
- One-to-one correspondence. The child learns each number can only correspond to one object in a set of things counted. For example, when counting, each truck has its own number: the child can’t skip a truck or count the same truck twice.
One last skill worth noting here involves knowing a number of objects just by looking. If you show a toddler 3 things and say ‘how many?’ they can recognize 3 without counting. Levine believes this ability, called subitizing, provides another clue as to why some types of number talk are more effective at prepping kids for future math skills than others.
“Most parents use low numbers when counting with their kids, but when kids see 3 bears they don’t have to count to understand. They can say ’3’,” says Levine. But the ability to subitize quickly diminishes with larger numbers of objects. “When parents extend the number talk beyond 3 up to 10, their kids have a better understanding of cardinality. We think this is because the somewhat higher numbers provide the opportunity to make the link and understand ‘this is why I’m counting’”.
Getting kids to make these links need not be a chore. “Parents can do fun, simple things around the house,” says Levine. “You don’t have to drill them.” In fact, by putting in a fun effort early, parents may be avoiding more unpleasant efforts down the road helping their kids catch up in math, which can be very difficult.
Simple ideas to teach math to preschoolers
- Count objects that are in front of the child and label the set size: “Let's count your dolls. 1, 2, 3, 4. You have 4 dolls.” Point at each object as it is counted and encourage the child to do the same. Counting something is better than just counting.
- Line up two sets of things: 3 trucks and 2 cars. Then count each set while pointing to each member of the corresponding pair: “1” (point to a car and a truck that are side by side); “2” (point to a car and a truck that are side by side); "there are 2 cars”; and “3” (pointing to the one extra truck that is not paired with a car); “there are 3 trucks. There are 2 cars and 3 trucks.” This helps kids learn one-to-one correspondence.
- Parents should find contexts in their daily routines when they can talk about numbers with their children. For example, “tonight there will be five people at home, so we need to put five plates on the table. Let's count them: 1-2-3-4-5. We have 5 plates.” This lets kids know people do math all the time.
- When you are walking in the neighbourhood, count the number of red cars you see, “1 red car, 2 red cars, 3 red cars -- today we saw 3 red cars.” This helps give kids a clue to the fact that things can be categorized, and therefore counted, in different ways.
- Counting can come up when you need to share. "We have 4 cookies and 2 children -- let's give 1 to you, and 1 to your friend, another 1 to you and another 1 to your friend. Let's count how many each of you have -- 1, 2 -- you have 2. 1, 2 -- you have 2. Each of you has 2 cookies!" This gives kids an early introduction to dividing a set into equal shares.
- Introduce basic calculation. “You have 2 trucks. If I give you 1 more, you will have...?” (Wait for child to answer, or supply an answer if the child doesn't know: "Now you have 3 trucks.”) Subtraction: “You have 3 trucks -- if you give one to me you will have...?" (wait or supply an answer).
Where is all this heading? Fractions!
Typically, the objects parents count with a preschooler are considered ‘whole entities’: trucks, oranges, coins, and blocks. But the reality is these objects are all potentially made up of smaller units or could be part of larger units.
“The basic idea behind fractions is that quantity is continuous and that a ‘unit’ can be just about any size,” says Solomon. “Imagine a ruler. It’s not the marks on a ruler that represent the number 1, 2 and so on. It’s the space between the marks that represent the quantity.”
The idea is not to try to get your child to fully understand this concept, but rather to insert little clues into your child’s head about the concept which may make the “ah ha” moment about fractions down the road easier to achieve. Solomon suggests counting objects that are more easily variable in terms of their unit size.
criminal law
the branch of law that deals with disputes or actions involving criminal penalties (as opposed to civil law). It regulates the conduct of individuals, defines crimes, and provides punishment for criminal acts
plaintiff
the individual or organization that brings a complaint to court
defendant
the individual or organization charged with a complaint in court
civil law
a system of jurisprudence, including private law and governmental actions, for settling disputes that do not involve criminal penalties
precedents
prior cases whose principles are used by judges as the bases for their decisions in present cases
stare decisis
Literally, "let the decision stand." The doctrine whereby a previous decision by a court applies as a precedent in similar cases until that decision is overruled
public law
cases involving the action of public agencies or officials
trial court
the first court to hear a criminal or civil case
court of appeals
a court that hears the appeals of trial-court decisions
supreme court
the highest court in a particular state or in the United States. This court primarily serves an appellate function
due process
the guarantee that no citizen may be subjected to arbitrary action by national or state government
writ of habeas corpus
a court order demanding that an individual in custody be brought to court and shown the cause for detention. Habeas corpus is guaranteed by the Constitution and can be suspended only in cases of rebellion or invasion
chief justice
the justice on the supreme court who presides over the court's public session
senatorial courtesy
the practice whereby the president, before formally nominating a person for a federal judgeship, finds out whether the senators from that state support the nomination
judicial review
the power of the courts to declare actions of the legislative and executive branch invalid or unconstitutional. The Supreme Court asserted this power in Marbury v. Madison (1803)
Supremacy Clause
A clause of Article VI of the constitution that states that all laws passed by the national government and all treaties are the supreme laws of the land and superior to all laws adopted by any state or any subdivision
standing
the right of an individual or an organization to initiate a court case
mootness
a criterion used by courts to avoid hearing cases that no longer require a resolution
writ of certiorari
a formal request by an appellant to have the Supreme Court review a decision of a lower court. Certiorari is from a Latin word meaning "to make more certain"
amicus curiae
"Friend of the court," an individual or group who is no party to a lawsuit but seeks to assist the court in reaching a decision by presenting an additional brief
briefs
written documents in which attorneys explain -- using case precedents -- why the court should rule in favor of their client
oral argument
the stage in Supreme Court proceedings in which attorneys for both sides appear before the court to present their positions and answer questions posed by the justices
opinion
the written explanation of the Supreme Court's decision in a particular case
regular concurrence
a concurring opinion that agrees with the outcome and the majority's rationale but highlights a specific legal point
special concurrence
a concurring opinion that agrees with the outcome but disagrees with the rationale presented by the majority opinion
dissenting opinion
an opinion written by a justice who voted against the majority in a particular case, in which the justice fully explains the reasoning behind his or her disagreement
judicial restraint
the judicial philosophy whose adherents refuse to go beyond the text of the Constitution in interpreting its meaning
judicial activism
the judicial philosophy that posits that the court should look beyond the text of the Constitution or a statute to consider broader societal implications of its decisions
rule of four
the rule that certiorari will be granted only if four justices vote in favor of the petition
class action suit
a lawsuit in which a large number of persons with common interests join together under a representative party to bring or defend a lawsuit, as when hundreds of workers join together to sue a company
dispute resolution
the court serves as a venue in which the facts of a case are established, punishment is meted out to violators, and compensation is awarded to victims (After-the-Fact)
anticipation of legal consequences allows private parties to form rational expectations and thereby coordinate their actions in advance of possible disputes (Before-the-Fact)
rule interpretation
courts must interpret statutes for relevance and use in cases and decisions
solicitor general
3rd in status in the Department of Justice; the top government lawyer in virtually all cases before appellate courts in which the government is a party; screens cases before the Supreme Court
law clerk
each justice has 4 law clerks to research legal issues and assist with the preparation of opinions and screen petitions for writs of certiorari
|
the movers and shakers
This page is about the idiom the movers and shakers.
Meaning: You can say people are the movers and shakers in a place or a situation if they are the ones with the power to make decisions.
For example:
• Most of the movers and shakers in the movie business live in Los Angeles.
• If you want to be a literary agent, you'll need to know some of the movers and shakers in the publishing business.
Quick Quiz:
The movers and shakers in the world of banking
a. transport equipment to new banks
b. work behind the counters in banks
c. control banks and other financial institutions
Contributor: Matt Errey |
Manpage of ADJTIMEX
Section: Linux Programmer's Manual (2)
Updated: 2016-03-15
adjtimex, ntp_adjtime - tune kernel clock
#include <sys/timex.h>

int adjtimex(struct timex *buf);

int ntp_adjtime(struct timex *buf);
Linux uses David L. Mills' clock adjustment algorithm (see RFC 5905). The system call adjtimex() reads and optionally sets adjustment parameters for this algorithm. It takes a pointer to a timex structure, updates kernel parameters from (selected) field values, and returns the same structure updated with the current kernel values. This structure is declared as follows:
struct timex {
    int  modes;      /* Mode selector */
    long offset;     /* Time offset; nanoseconds, if STA_NANO
                        status flag is set, otherwise
                        microseconds */
    long freq;       /* Frequency offset;
                        see NOTES for units */
    long maxerror;   /* Maximum error (microseconds) */
    long esterror;   /* Estimated error (microseconds) */
    int  status;     /* Clock command/status */
    long constant;   /* PLL (phase-locked loop) time constant */
    long precision;  /* Clock precision
                        (microseconds, read-only) */
    long tolerance;  /* Clock frequency tolerance (read-only);
                        see NOTES for units */
    struct timeval time;
                     /* Current time (read-only, except for
                        ADJ_SETOFFSET); upon return, time.tv_usec
                        contains nanoseconds, if STA_NANO status
                        flag is set, otherwise microseconds */
    long tick;       /* Microseconds between clock ticks */
    long ppsfreq;    /* PPS (pulse per second) frequency
                        (read-only); see NOTES for units */
    long jitter;     /* PPS jitter (read-only); nanoseconds, if
                        STA_NANO status flag is set, otherwise
                        microseconds */
    int  shift;      /* PPS interval duration
                        (seconds, read-only) */
    long stabil;     /* PPS stability (read-only);
                        see NOTES for units */
    long jitcnt;     /* PPS count of jitter limit exceeded
                        events (read-only) */
    long calcnt;     /* PPS count of calibration intervals
                        (read-only) */
    long errcnt;     /* PPS count of calibration errors
                        (read-only) */
    long stbcnt;     /* PPS count of stability limit exceeded
                        events (read-only) */
    int tai;         /* TAI offset, as set by previous ADJ_TAI
                        operation (seconds, read-only,
                        since Linux 2.6.26) */
    /* Further padding bytes to allow for future expansion */
};
The modes field determines which parameters, if any, to set. (As described later in this page, the constants used for ntp_adjtime() are equivalent but differently named.) It is a bit mask containing a bitwise OR combination of zero or more of the following bits:
ADJ_OFFSET
Set time offset from buf.offset. Since Linux 2.6.26, the supplied value is clamped to the range (-0.5s, +0.5s). In older kernels, an EINVAL error occurs if the supplied value is out of range.
ADJ_MAXERROR
Set maximum time error from buf.maxerror.
ADJ_ESTERROR
Set estimated time error from buf.esterror.
ADJ_TIMECONST
Set PLL time constant from buf.constant. If the STA_NANO status flag (see below) is clear, the kernel adds 4 to this value.
ADJ_SETOFFSET (since Linux 2.6.29)
Add buf.time to the current time. If buf.status includes the ADJ_NANO flag, then buf.time.tv_usec is interpreted as a nanosecond value; otherwise it is interpreted as microseconds.
ADJ_MICRO (since Linux 2.6.36)
Select microsecond resolution.
ADJ_NANO (since Linux 2.6.36)
Select nanosecond resolution. Only one of ADJ_MICRO and ADJ_NANO should be specified.
ADJ_TAI (since Linux 2.6.26)
Set TAI (Atomic International Time) offset from buf.constant.
ADJ_TAI should not be used in conjunction with ADJ_TIMECONST, since the latter mode also employs the buf.constant field.
ADJ_TICK
Set tick value from buf.tick.
ADJ_OFFSET_SS_READ (functional since Linux 2.6.28)
Return (in buf.offset) the remaining amount of time to be adjusted after an earlier ADJ_OFFSET_SINGLESHOT operation. This feature was added in Linux 2.6.24, but did not work correctly until Linux 2.6.28.
Ordinary users are restricted to a value of either 0 or ADJ_OFFSET_SS_READ for modes. Only the superuser may set any parameters.
The buf.status field is a bit mask used to set and/or retrieve status bits associated with the NTP implementation:

STA_PLL (read-write)
Enable phase-locked loop (PLL) updates via ADJ_OFFSET.
STA_PPSFREQ (read-write)
Enable PPS (pulse-per-second) frequency discipline.
STA_PPSTIME (read-write)
Enable PPS time discipline.
STA_FLL (read-write)
Select frequency-locked loop (FLL) mode.
STA_INS (read-write)
Insert a leap second after the last second of the UTC day.
STA_DEL (read-write)
Delete a leap second at the last second of the UTC day.
STA_UNSYNC (read-write)
Clock unsynchronized.
STA_FREQHOLD (read-write)
Hold frequency. Normally adjustments made via ADJ_OFFSET result in dampened frequency adjustments also being made. So a single call corrects the current offset, but as offsets in the same direction are made repeatedly, the small frequency adjustments will accumulate to fix the long-term skew.
This flag prevents the small frequency adjustment from being made when correcting for an ADJ_OFFSET value.
STA_PPSSIGNAL (read-only)
A valid PPS (pulse-per-second) signal is present.
STA_PPSJITTER (read-only)
PPS signal jitter exceeded.
STA_PPSWANDER (read-only)
PPS signal wander exceeded.
STA_PPSERROR (read-only)
PPS signal calibration error.
STA_CLOCKERR (read-only)
Clock hardware fault.
STA_NANO (read-only; since Linux 2.6.26)
Resolution (0 = microsecond, 1 = nanosecond).
STA_MODE (since Linux 2.6.26)
Mode (0 = Phase Locked Loop, 1 = Frequency Locked Loop).
STA_CLK (read-only; since Linux 2.6.26)
Clock source (0 = A, 1 = B); currently unused.
Attempts to set read-only status bits are silently ignored.
ntp_adjtime ()
MOD_CLKB is a synonym for ADJ_TICK.
TIME_OK
Clock synchronized, no leap second adjustment pending.
TIME_INS
Insertion of a leap second is in progress.
TIME_WAIT
A leap-second insertion or deletion has been completed. This value will be returned until the next ADJ_STATUS operation clears the STA_INS and STA_DEL flags.
TIME_ERROR
STA_PPSSIGNAL is clear and either STA_PPSFREQ or STA_PPSTIME is set.
The symbolic name TIME_BAD is a synonym for TIME_ERROR, provided for backward compatibility.
On failure, these calls return -1 and set errno.
EFAULT
buf does not point to writable memory.
EINVAL (kernels before Linux 2.6.26)
An attempt was made to set buf.freq to a value outside the range (-33554432, +33554432).
EINVAL (kernels before Linux 2.6.26)
An attempt was made to set buf.status to a value other than those listed above.
EINVAL
An attempt was made to set buf.tick to a value outside the range 900000/HZ to 1100000/HZ, where HZ is the system timer interrupt frequency.
EPERM
buf.modes is neither 0 nor ADJ_OFFSET_SS_READ, and the caller does not have sufficient privilege. Under Linux, the CAP_SYS_TIME capability is required.
The leap-second processing triggered by STA_INS and STA_DEL is done by the kernel in timer context. Thus, it will take one tick into the second for the leap second to be inserted or deleted.
adjtimex(), ntp_adjtime(): Thread safety: MT-Safe.
Neither of these interfaces is described in POSIX.1.
The preferred API for the NTP daemon is ntp_adjtime(3).
NTP "Kernel Application Program Interface"
ntp_adjtime ()
|
This Week
5 October 2011
Rules of death: When the end is not what it seems
The life-saving treatment of therapeutic hypothermia is calling into question the guidelines doctors use to determine brain death
In the balance
IT’S a nightmarish scenario: a 55-year-old man, pronounced dead after a cardiac arrest, is minutes away from organ donation when he begins to show signs of life. “On being moved to the operating room table, the anaesthetist noticed that he was coughing,” says neurologist Adam Webb of Emory University School of Medicine in Atlanta, Georgia, who initially pronounced the man brain dead.
It transpired that the man had also regained corneal reflexes and was breathing – both signs of a functioning brainstem. Although the man later died, his case has reignited a debate about whether clearer guidelines are needed to determine brain death (Critical Care Medicine, DOI: 10.1097/CCM.0b013e3182186687).
At issue is a treatment called therapeutic hypothermia, which Webb’s patient had. It involves cooling the body to about 33 °C to minimise damage to tissues and brain cells caused by oxygen deprivation after a cardiac arrest.
Since the publication of two landmark papers in 2002 in The New England Journal of Medicine, increasing numbers of hospitals are using therapeutic hypothermia. It saves lives, but the technique muddies the waters when it comes to determining brain death. It is also making it harder to predict who is likely to recover from a coma.
Cardiac arrest is a leading cause of death in western countries, affecting around 295,000 people each year in the US alone. Just 15 per cent of people survive as far as hospital and of these around 80 per cent will fall into a coma. Only a third regain consciousness, and they may have brain damage.
Doctors use a number of standard indicators to help them with the prognosis ...
|
Are Some Teenagers Wired for Addiction?
“The behavior might look the same but there may be different brain regions contributing to that behavior,” says neuroimaging expert Dr. Robert Whelan at the University of Vermont, who was the study’s lead author. The study is part of a larger project funded by the European Union that is conducting a systematic neural, genetic and behavioral assessment of teenagers in Ireland, England, France, and Germany.
Generally, the researchers found that the adolescents with symptoms of ADHD, the most common neurodevelopmental psychiatric disorder, and those who had used drugs or alcohol had an equally hard time handling the task.
“Our study lends credence to the idea that ADHD and substance abuse are not intrinsically linked together,” Whelan tells the Health Blog. “There appear to be different regions associated with different kinds of impulsivity.”
|
Malaria and Rome
Malaria and Rome : A History of Malaria in Ancient Italy
Malaria and Rome is the first comprehensive study of malaria in ancient Italy since the research of the distinguished Italian malariologist Angelo Celli in the early twentieth century. It demonstrates the importance of disease patterns and history in understanding the demography of ancient populations. Robert Sallares argues that malaria became increasingly prevalent in Roman times in central Italy as a result of ecological change and alterations to the physical landscape such as deforestation. Making full use of contemporary sources and comparative material from other periods, he shows that malaria had a significant effect on mortality rates in certain regions of Roman Italy. Robert Sallares incorporates all the important advances made in many relevant fields since Celli's time. These include recent geomorphological research on the evolution of the coastal environments of Italy that were notorious for malaria in the past, biomolecular research on the evolution of malaria, ancient DNA as a new source of evidence for malaria in antiquity, the differentiation of mosquito species that permits understanding of the phenomenon of anophelism without malaria (where the climate is optimal for malaria and Anopheles mosquitoes are present, but there is no malaria), and recent medical research on the interactions between malaria and other diseases. The argument develops with a careful interplay between the modern microbiology of the disease and the Greek and Latin literary texts. Both contemporary sources and comparative material from other periods are used to interpret the ancient sources. In addition to the medical and demographic effects on the Roman population, Malaria and Rome considers the social and economic effects of malaria, for example on settlement patterns and on agricultural systems. 
Robert Sallares also examines the varied human responses to and interpretations of malaria in antiquity, ranging from the attempts at rational understanding made by the Hippocratic authors and Galen to the demons described in the magical papyri.
Product details
• Hardback | 358 pages
• 160.5 x 221.5 x 24.6mm | 616.9g
• Oxford University Press
• Oxford, United Kingdom
• English
• 37 photographs and 21 maps in-text
• 0199248508
• 9780199248506
About Robert Sallares
Robert Sallares is Research Fellow in Biomolecular Sciences, UMIST
Review quote
This book is beautifully written, the photographs intelligently chosen, and the list of references comprehensive. I found it fascinating and instructive reading. Annals of Tropical Medicine and Parasitology
Table of contents
1. Introduction ; 2. Types of malaria ; 3. Evolution and prehistory of malaria ; 4. The ecology of malaria in Italy ; 5. The demography of malaria ; 6. The Pontine Marshes ; 7. Tuscany ; 8. The city of Rome ; 9. The Roman Campagna ; 10. Apulia ; 11. Geographical contrasts and demographic variation
|
Enzymes Make the World Go 'Round
Enzymes are very specific and only work with certain substrates. We have a whole section where we tell you about reactions and the molecules that change in those reactions. Chemical bonds are being created and destroyed over a series of many intermediate reactions. Those changes rarely happen on their own when you look at biological systems.
Assembly Line Robots
You all know about cars and the assembly lines where they are made. There are giant robots helping people do specific tasks. Some lift the whole car, some lift doors, and some put bolts on the frames. Enzymes are like those giant robots. They grab one or two pieces, do something to them, and then release them. Once their job is done, they move to the next piece and do the same thing again. They are little protein robots inside your cells.
Four Steps of Enzyme Action
1. The enzyme and the substrate are in the same area. Some situations have more than one substrate molecule that the enzyme will change.
Can You Control Them?
Good question! We know what you're thinking: "What if enzymes just kept going and converted every molecule in the world? They would never stop. They would become monsters!" Don’t worry. There are many factors that can regulate enzyme activity, including temperature, activators, pH levels, and inhibitors. We’ll look at enzyme control in the next section.
Did you know about the largest man-made molecule?
Source: Angewandte Chemie/Wiley
|
Tree Frog Dot-to-Dot
Standards 2.NBT.A.2
What animal is hidden in the numbers? Challenge your second grader by having him complete this dot-to-dot worksheet by counting by fives to finish the picture. He'll have a blast exploring his inner mathematician and zoologist!
Second Grade Addition Multiplication Worksheets: Tree Frog Dot-to-Dot
|
Friday 26 August 2016
Animal calls 'more like language'
Published 20/08/2014 | 00:21
The mockingbird can mimic more than 100 other birds' songs
Dr Dolittle's ability to talk to the animals may have at least some scientific basis, according to new research that challenges the uniqueness of human language.
An analysis of vocalisations made by animals ranging from finches to whales and orang utans has shown they share more in common with human speech than was previously thought.
The findings threaten to overturn accepted ideas about the apparent randomness of animal noises - one of the key differences believed to separate humans and other species.
They suggest there may be a "missing link" step on the evolutionary path from animal communication to human language that has not yet been identified.
US lead scientist Dr Arik Kershenbaum, from the National Institute for Mathematical and Biological Synthesis in Knoxville, Tennessee, said: "Language is the biggest difference that separates humans from animals evolutionarily, but multiple studies are finding more and more stepping stones that seem to bridge this gap.
"Uncovering the process underlying vocal sequence generation in animals may be critical to our understanding of the origin of language."
Many animals produce apparently complex sounds. The mockingbird, for instance, can mimic more than 100 other birds' songs, while the hyrax, or rock badger, produces a range of wails, chucks and snorts that define its male territory.
But traditionally animal calls were thought at a fundamental level to lack the intelligent intricacy of human speech.
Until now, they were all assumed to involve a simple structural system known as the "Markov" process.
"Markov" vocalisations are made up of sequences that can easily be predicted by listening to a finite number of preceding elements.
Animal calls are said to be restricted by rigid Markovian rules of "regular" grammar that operate a little like a washing machine's automation programme.
Human language, in contrast, employs what are called "context-free grammars" that apply the same set of rules in widely different ways, making it much less predictable. It is non-Markovian.
Scientists conducting the new study looked for evidence of Markovian dynamics in the vocalisations of seven species - chickadees, finches, bats, orang utans, killer whales, pilot whales, and hyraxes.
They found little evidence of Markovian processes in animal calls. Instead, the sounds made by animals were more consistent with statistical models relevant to human language.
The researchers wrote in the journal Proceedings of the Royal Society B: "Our data suggest that non-Markovian vocal sequences may be more common than Markov sequences, which must be taken into account when evaluating alternative hypotheses for the evolution of signalling complexity, and perhaps human language origins."
Press Association
|
Ap World History Presentation Verson1 3
Early Mesopotamia
Published in: Education, Technology
Mesopotamia: “The Land Between the Rivers”

KEY TERMS
• Fertile Crescent - an area of land in southwest Asia that experienced an agricultural revolution
• Agricultural Revolution - transition from a hunting and gathering society to settlements with farming
• Sumer - one of the first cities in the Fertile Crescent
• Patriarchal Society - a male-dominated society
• Cuneiform - an early system of writing developed in the Fertile Crescent
• The Epic of Gilgamesh - collection of stories about Gilgamesh, the legendary god-king of Uruk
• Hammurabi’s Code - first set of written laws

KEY TERMS (cont.)
• Hammurabi - a Babylonian emperor who ruled from 1792-1750 BCE
• City-States - independent cities with political and military control over large areas, including nearby agricultural regions
• Ziggurat - a Sumerian temple which doubled as a center of trade and commerce
• lex talionis - “law of retribution”, or an eye for an eye; the punishment is the same as the crime
• Sargon of Akkad - ruler before Hammurabi; ruled from 2370-2315 BCE

What Makes Mesopotamia Significant?

Fertile Crescent: The “Origin” of Agriculture
• Agricultural region because of the water from the Tigris and Euphrates rivers.
• Earliest farming communities = faster development.
• By 6000 BCE, small-scale irrigation had begun.
• Led to a surplus of food and the forming of cities.

Sumer (southern Mesopotamia)
• People were attracted to Sumer because of its agricultural potential, and later because of its wealth.
• By 5000 BCE, Sumerians were constructing complex irrigation networks.
• By 4000 BCE they had built the world’s first cities, which served as marketplaces and cultural centers.
• Due to internal and external pressures, these transformed into city-states.
• By 3000 BCE, Sumer had a population of 100,000 and kings who held absolute power.

Ziggurat

Cuneiform
• Developed around 2900 BCE by the Sumerians to record commercial and tax documents.
• Used graphic symbols to represent sounds, syllables, ideas, and objects.
• Replaced the earlier system of pictographs.
• Written with a reed stylus to impress symbols on wet clay.
• Used for more than 3000 years.

Hammurabi (Babylonian Empire)
• Emperor of Babylonia; ruled from 1792-1750 BCE.
• Improved on previous administrative techniques by relying on a centralized bureaucracy and regular taxation.
• Established Babylon as the capital and appointed deputies to manage different territories.
• Enforced a system of regular taxes instead of plundering conquered lands for wealth.
• Imposed a code of laws to maintain order in his empire.

Hammurabi’s Code
• First set of documented laws.
• Based on the principle of lex talionis, the “law of retaliation”.
• Included civil and criminal laws.
• The laws were prejudiced: the upper class received different punishments from the lower class, and likewise men and women; however, the laws still applied to the king.
• The laws set a high standard of behavior with strict punishments to back it up.
• Even though judges often relied on their own judgment when deciding a case, these laws show an attempt to regulate punishments.

Mesopotamia
• Political - divided into 3 social classes (ruling class, priests, and commoners)
• Economic - Sumerians were an early example of established trade
• Religious - polytheistic, with a focus on nature gods; built ziggurats as temples and centers of commerce
• Social - patriarchal society; wealthy upper class
• Intellectual - developed a system of writing (cuneiform) and began studies such as mathematics and astronomy
• Artistic - many statues of gods
• Near - part of the Fertile Crescent in the Middle East (modern-day Iraq)

Main Themes
• Theme 1: Humans v. Environment
- Population trends: population increased with settlements
- Migration: many diverse peoples migrated to the area
• Theme 2: Cultural Interactions
- Religions: polytheistic originally, but also monotheistic Jews
- Inventions: small-scale irrigation, bronze/iron metallurgy, ships, and use of the wheel
- Developed cuneiform, astronomy, and mathematics (education)
• Theme 3: Political Structures, Expansion and Conflict
- Political structure: city-states
- Government: first elected kings with an assembly, but later hereditary succession; kings seen as sons of the gods
- Empires: Sumer -> Sargon -> Babylon -> Assyrian -> New Babylon

Main Themes (continued)
• Theme 4: Economic Systems
- Agriculture: developed effective irrigation and used commoners as laborers
- Traded sheep, oxen, wheat, barley, pots, and fish
- Taxed the lower classes; community projects
• Theme 5: Social Structures
- Gender roles: patriarchal society, but women still had rights under Hammurabi’s code
- Social classes |
Say hello to our new moon
PARIS (AFP) Mar 26, 2004
The asteroid, 2003 YN17, "is probably a chunk of debris" from an impact between a larger space rock and the surface of the moon, the British weekly says.
2003 YN17's orbital plane is roughly the same as the earth's, but its unusual path, compounded by a corkscrew-like track, means that sometimes it is ahead of us and sometimes it is behind.
"Since 1996, its path has taken it round the earth, making it a quasi-satellite. This phase will last until 2006," the report says.
The finders are a team led by Paul Chodas, an asteroid specialist at NASA's famed Jet Propulsion Laboratory in California.
Two other "quasi-moons" -- temporary fellow-travellers that loop around the earth for a while as they girdle the sun -- have been spotted in recent years: Cruithne and asteroid 2002 AA29. |
List of wars extended by diplomatic irregularity
From Wikipedia, the free encyclopedia
There are different claims of wars extended by diplomatic irregularity, sometimes by a small country named in a declaration of war being accidentally omitted from a peace treaty concerning the wider conflict. These "extended wars" have only been discovered after the fact, and have no impact during the long period (often hundreds of years) after the actual fighting ended.
The discovery of an "extended war" is sometimes an opportunity for a ceremonial peace to be contracted by the belligerent parties. This can boost tourism and the relations between states involved by providing interaction not before engaged in, and in some cases, starting relations that have not occurred for historical or geographic reasons. Ceremonial peace, in these cases, is often good natured and for this reason can involve the highest levels of government or foreign affairs offices.
Such a situation is to be distinguished from that of parties deliberately avoiding a peace treaty when political disputes outlive military conflict, as in the Kuril Islands dispute between Japan and Russia.
Extended wars[edit]
First Anglo-Dutch War (Isles of Scilly and the Dutch Republic). Declared 1651; de facto peace 1654; de jure peace 1986. The Dutch Republic under Michiel Adriaenszoon de Ruyter declared war on the Isles of Scilly, as the final stronghold of the English Royalist naval force. When the Dutch and English republics signed the Treaty of Westminster (1654), this separate state of war was not mentioned and thus not included in the peace. The Dutch ambassador, visiting in April 1986 to conclude peace, joked that it must have been harrowing to the Scillonians "to know we could have attacked at any moment."[1]

Peninsular War (Huéscar and Denmark). Declared 1809; de facto peace 1814; de jure peace 1981. Huéscar was at war with Denmark as a result of the Napoleonic wars over Spain, in which Denmark supported the French Empire. The official declaration of war was forgotten until it was discovered by a local historian in 1981, followed by the signing of a peace treaty on 11 November 1981 by the city mayor and the Ambassador of Denmark. Not a single shot was fired during the 172 years of war, and nobody was killed or injured.

Russo-Japanese War (Principality of Montenegro and the Empire of Japan). Declared 1904; de facto peace 1905; de jure peace 2006.[2] Montenegro declared war in support of Russia but lacked a navy or any other means to engage Japan. After Montenegro (independent in 1904, but united with Serbia by 1919) voted in 2006 to resume its independence, it concluded a separate peace treaty in order to establish diplomatic relations with Japan. See Japan–Montenegro relations.

World War I (Andorra and the German Empire). Declared 1914; de facto peace 1918; de jure peace 1958.[3] Andorra was not invited to the Paris Peace Conference.

World War I (Costa Rica and the German Empire). Declared 1918; de facto peace 1918; de jure peace 1945. Due to a dispute over the legitimacy of the government of Federico Tinoco Granados, Costa Rica was not a party to the Treaty of Versailles and did not unilaterally end the state of war.[4] The technical state of war ended only after World War II, when Costa Rica was included in the Potsdam Agreement. Costa Rica did not issue a declaration of war against Germany in World War II.[5]

World War II (Allies of World War II and Germany). Declared 1939; de facto peace 1945; de jure peace 1991. At the time World War II was declared over, there was no single German state that all occupying powers accepted as the sole representative of the former Reich, so the "war" technically did not finish until German reunification in 1990. In 1949 some technicalities were modified to soften the state of war between the U.S. and Germany; the state of war was retained since it provided the U.S. with a legal basis for keeping troops in Western Germany.[citation needed][6] As a legal substitute for a peace treaty,[7] the U.S. formally ended the state of war between the U.S. and Germany on October 19, 1951 at 5:45 p.m. According to the U.S., a formal peace treaty had been stalled by the Soviet Union.[7] It was not until the Treaty on the Final Settlement with Respect to Germany was signed in 1990 that peace was formally established. The treaty came into effect on March 15, 1991.

Gulf War (UN Forces, led by the United States, and Iraq). Declared 1991; de facto peace 1991; de jure peace 2003. The UN resolution which ended the first Gulf War enacted only a cease-fire; it did not end the state of war with Iraq.[8] The British Government would, 12 years later, use the de jure state of war with Iraq as the legal basis for the 2003 invasion of Iraq.[9] Opponents of the Iraq War have criticised this interpretation, with one source labelling it as "legal gymnastics" (see Legality of the Iraq War).[10][11][12]
References
1. ^ "Britain: Peace In Our Time", Time, 28 April 1986.
2. ^ "Montenegro, Japan to declare truce". United Press International.
3. ^ "World War I Ends in Andorra", UPI story in the New York Times, Sep 25, 1958. p. 66. A number of sources say 1939, but there is no period confirmation for this.
4. ^ United States. Congress. Senate. Committee on Foreign Relations (1919). Treaty of peace with Germany: Hearings before the Committee on Foreign Relations, United States Senate, sixty-sixth Congress, first session on the Treaty of peace with Germany, signed at Versailles on June 28, 1919, and submitted to the Senate on July 10, 1919. Govt. Print Off. pp. 206–209. Retrieved 2013-02-09.
5. ^ "11 Wars That Lasted Way Longer Than They Should Have". Mental Floss.
6. ^ "THE NATIONS: A Step Forward". Time. November 28, 1949. Retrieved May 11, 2010.
7. ^ a b "National Affairs: War's End". Time. July 16, 1951. Retrieved May 11, 2010.
8. ^ Lord Goldsmith (2003-03-17). "A case for war". Retrieved 2015-11-01.
9. ^ David Morrison (2015-10-28). "Was Britain's military action in Iraq legal?". Retrieved 2015-11-01.
10. ^ Peter Oborne (2015-10-31). "Peter Oborne's unofficial Chilcot Inquiry into Iraq war".
11. ^ Elizabeth Wilmshurst (2005-03-24). "Wilmshurst resignation letter". Retrieved 2015-11-01.
12. ^ "Clegg clarifies stance after saying Iraq war 'illegal'". 2010-07-21. Retrieved 2015-11-01. |
Only bureaucrats can solve global warming.
Five years ago, South Carolina Republican Senator Lindsey Graham joined a handful of senators traveling to the Yukon territory to view firsthand the effects of climate change. Witnessing melting ice caps and permafrost, and Inuit communities struggling to cope with a transforming environment, Graham was “moved.” “Climate change is different when you come here, because you see the faces of people experiencing it,” he said. In the following years, he asserted that “climate change is real” and promoted a cap-and-trade bill in the Senate.
Today, Graham is sprinting in the other direction. In April, he abandoned his climate bill when Democrats decided to focus on immigration reform first. He remained opposed even when they ultimately agreed to take it up. These days, he is refusing to acknowledge that carbon-dioxide emissions cause warmer temperatures. “I think they’ve been alarmist and the science is in question,” he says. Graham no longer sounds especially moved by the plight of the Inuit, who may be facing a threat to their way of life but are not facing the threat of a right-wing primary challenge.
The canary in the coal mine is a classic metaphor for the science of climate change. For the politics of climate change, Graham is the canary. Once the sole Senate Republican supporting cap-and-trade, he’s keeled over in his cage, his limp corpse a sign that Congress can’t handle this issue. There’s only one solution at hand: Let the Environmental Protection Agency (EPA) impose regulations to stop climate change.
Three years ago, the Supreme Court ruled that the Clean Air Act compelled the EPA to regulate carbon-dioxide emissions. Hardly anybody thought the agency would actually do so. The threat of regulation was considered leverage to force Congress into passing legislation. But, while Congress would like to see carbon-dioxide emissions reduced, it also shows no sign of willingness to handle the job itself. So it is up to the EPA.
The widespread assumption that Congress would act arose from a consensus among economists that an elegant legislative solution would work better than blunt regulatory fiat. It’s certainly true that a cap-and-trade system would reduce carbon more efficiently in theory. The Clean Air Act only provides for setting a single, across-the-board standard for emissions. Some industries would undergo great expense by dropping their emissions below the standard. Others could drop their emissions far below the standard at minimal cost, but they have no incentive to do so.
A cap-and-trade arrangement would bring market forces to bear on the problem. Congress would define the total permissible level of emissions. Companies that could cheaply reduce their emissions a great deal would sell allowances to those that could only do so at great expense. Thus, we would achieve the maximum reduction in carbon emissions at the minimum cost.
But you probably need to write a new law to set up cap-and-trade. And that’s proven messy. The House filled its cap-and-trade bill with sundry exemptions and special permits, significantly dampening its efficiency. The Senate, with its self-imposed 60-vote threshold, probably can’t pass any cap-and-trade bill at all. Whatever the inefficiencies imposed by the regulatory approach, they pale before the inefficiencies imposed by the legislative system.
In 1997, economist Alan Blinder made a provocative case for putting problems in the hands of unelected experts. Writing in Foreign Policy, Blinder recalled his experience at the Council of Economic Advisers, where, he wrote, “a policy’s merits can quickly get buried under a mountain of political detritus even before the policy emerges from the White House pressure cooker. Then it goes to Congress, where things only get worse.” Blinder left to work at the Federal Reserve, where decision-making, while imperfect, followed policymakers’ sense of the public good.
Blinder did not argue for dictatorship by wonk. (Wonktatorship?) Instead, he suggested that certain kinds of policies better lend themselves to government-by-expert rather than government-by-Congress. Blinder did not mention carbon emissions, but his three criteria turn out to describe the issue to a tee.
First, it’s a technical issue requiring specialized knowledge. (Anybody unfortunate enough to witness the blustering assaults on climate science by meatheads like James Inhofe or Joe Barton can easily grasp the pitfalls of allowing Congress to adjudicate science.) Second, the issue requires long time horizons. Congress is not designed to minimize the risks of catastrophes that might take place decades hence. Nor is it prone to consider the interests of potential future industries, like renewable energy, alongside existing ones. And third, the issue requires imposing short-term pain in order to avert long-term costs, a trade-off most pols are loath to make.
The obvious objection to EPA regulation is that it's undemocratic: Removing an issue from "politics" is a nice way of saying you're removing it from accountability to the people. The objection has some truth, but less than meets the eye. While Congress never dreamed of limiting carbon-dioxide emissions when it passed the Clean Air Act four decades ago, the legislation did authorize the agency to set regulatory standards limiting air pollutants, "which in [the administrator's] judgment cause, or contribute to, air pollution which may reasonably be anticipated to endanger public health or welfare." The Supreme Court has correctly ruled that greenhouse gases qualify as such a pollutant.
And, while EPA regulation would distance the issue from democratic accountability, it would not remove it entirely. If the public objected strongly enough, it could vote for a Congress to overturn the EPA regulation.
In practice, we accept this kind of rule-by-expert routinely. Monetary policy, once the most political of issues—i.e., William Jennings Bryan denouncing the “cross of gold”—is now formulated with little regard for public opinion. Congress could overrule a decision by the Fed, but it never does. Likewise, few of us would like Congress to seize control of setting food and drug standards from the FDA.
If that doesn’t satisfy you, there’s the blunt reality that the current Senate may be the most liberal one we have for a long time, and it probably isn’t liberal enough to pass a climate-change law. On most issues, I’d wait until Congress can act. But are you willing to take a chance of sustaining irreversible climate damage for the sake of maximal democratic accountability? I’m not.
Jonathan Chait is a senior editor of The New Republic.
Know How To Use a Slide Rule?
high_rolla writes "How many of you have actually used a slide rule? The slide rule was a simple yet powerful and important tool for engineers and scientists before the days of calculators (let alone PCs). In fact, several people I know still prefer to use them. In the interest of preserving this icon we have created a virtual slide rule for you to play with." Wikipedia lists seven other online simulations.
• E6-B (Score:1, Informative)
by Anonymous Coward on Friday September 28, 2007 @12:56PM (#20784263)
The E6-B that every pilot learns to use in ground school is basically a special-purpose circular slide rule.
• by BlackPignouf ( 1017012 ) on Friday September 28, 2007 @01:09PM (#20784461)
What is the point of having such a long rule, if you only see a part of it and cannot move both parts at the same time?
At least, this one is usable:
• Re:At least (Score:3, Informative)
by inKubus ( 199753 ) on Friday September 28, 2007 @01:10PM (#20784481) Homepage Journal
This slide rule is broken.
Try this one; it's much better and actually correctly laid out :)
• I have (Score:4, Informative)
by ajs318 ( 655362 ) <.ku.oc.dohshtrae. .ta. .2pser_ds.> on Friday September 28, 2007 @01:16PM (#20784565)
When my grandad died, he left his "old" slide rules to my dad and me. My dad kept the original wood and cellulose one from the 1940s; I got the plastic one from the 1960s / 70s.
I soon got the hang of using it (and it can be quicker than a calculator sometimes), but I knew the general principle from before anyway. The main thing you have to remember is the slide rule only ever gives you the mantissa; you have to work out the exponent yourself. This means you have to do a rough mental calculation. People often put too much trust in calculators. When I was filling in order forms by hand in a previous job, I never used a calculator -- and I never got called out on a wrong total.
• by fiid ( 4432 ) on Friday September 28, 2007 @01:21PM (#20784657)
The E6-B is a rotary slide rule that pilots use for calculating wind correction angles, time/speed/distance problems, conversion between units (i.e. weight of a certain number of gallons of fuel), and fuel consumption.
It's preferred over digital devices because it still works when the batteries go flat, it is easy to use with one hand, and some models are actually smaller.
• Re:Um No. (Score:4, Informative)
by rubycodez ( 864176 ) on Friday September 28, 2007 @01:30PM (#20784763)
Actually, ignorant people in survival situations make all kinds of bad decisions and don't know how to treat or stabilize someone with an injury. Knowing poisonous from edible plants, cleaning and properly cooking an animal without contaminating it: these are all things that people knew in the recent past, but you'd better learn them now rather than by trial and error (you die or are maimed for life if you're wrong).
• For dumb Americans: (Score:3, Informative)
by morgan_greywolf ( 835522 ) on Friday September 28, 2007 @01:55PM (#20785173) Homepage Journal
'Gymnasium' is what they call 'high school' in some countries.
• by darrint ( 265374 ) on Friday September 28, 2007 @02:07PM (#20785369) Homepage
Better instructions...
• by dereference ( 875531 ) on Friday September 28, 2007 @02:12PM (#20785449)
I have nothing to do with that site, and as many others have mentioned there are far better virtual slide rules available, but I did learn long ago how one of these things operates, and the instructions in TFA are horrible even if you know what you're doing.
First, the term "index" has the old-school meaning of the number "1" and it appears at either end of the C scale. On the left side of C it means 1, and the right side it means 10, but there are no actual decimal points involved (you're on your own for order of magnitude with this device) so they're equivalent at either end. Also, for multiply and divide, you don't need that hairline slider that covers all the bars; that's only useful if you need to align two values on non-adjacent rules. Just slide the center bar (the one that holds scale C) back and forth.
The other important point to note is that you'll see numerals 1-9 between 1 and 2; those are just convenience markers for 1.1 through 1.9. That first (smaller) 2 you see, reading from left to right, is really 1.2 not 2.0.
So to multiply 6x2, we can go either direction, starting at 2 and multiplying by 6, or starting at 6 and multiplying by 2. To start at 2, slide the center part of the bar so that the right-hand "index" (1) of scale C is directly above the 2.0 on scale D. Now to "multiply" you don't do anything; you just read the result, which is found on scale D, directly under the 6 you wanted to multiply. Here you'll see 1.2 is directly under the 6.
Wait, though, we used the right-hand index, which is 10 not 1, so we need to multiply the result by 10. So 1.2 becomes 12 (which is why I said you have to do your own decimal point management). To start at 6 instead, slide the right-hand "index" (1) of scale C directly above the 6 on scale D; your answer will be on D again, directly under the 2.0 of slide C. Again, we used the right-hand index of 10, not 1, so we multiply the 1.2 by 10 to get 12.
How did I know to use the right-hand index rather than the left-hand index? Well, if you slide the left-hand index of C all the way to 2.0 on D, you'll notice that the 6 you need to multiply is off the edge of the device--an overflow, if you will--so you must essentially work with 10 rather than 1 and move the decimal at the end.
With this extremely trivial example, you should be able to follow the rest of the terribly-written instructions FTFA for divide (although you can do significantly more with a slide rule than just multiply and divide).
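The multiplication walkthrough above boils down to adding logarithmic distances along the scales. A minimal Python sketch of that idea (the function name `slide_rule_multiply` is my own, purely illustrative):

```python
import math

# A slide rule multiplies by adding lengths proportional to log10 of each
# operand. Positions on the C/D scales run from log10(1) = 0 to log10(10) = 1.

def slide_rule_multiply(a, b):
    """Multiply the way a slide rule does: add log-scale distances, then
    track the 'overflow' (using the right-hand index) as a decimal shift."""
    pos = math.log10(a) + math.log10(b)   # total distance along the D scale
    overflow = int(pos)                   # how many times we ran off the scale
    mantissa = 10 ** (pos - overflow)     # what you'd read on the D scale
    return mantissa, overflow

m, shift = slide_rule_multiply(6, 2)
# Reading ~1.2 on the scale, shifted one decimal place: 12
print(round(m, 2), shift)  # 1.2 1
```

This mirrors the comment's point exactly: the rule hands you the mantissa (1.2), and the order of magnitude (the factor of 10) is your own bookkeeping.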
• Re:Mildot Master (Score:3, Informative)
by Rudisaurus ( 675580 ) on Friday September 28, 2007 @02:33PM (#20785821)
And pilots! Don't forget the E6B flight calculator -- a.k.a. whiz wheel. Its use is still taught in ground school today.
• Re:Of course (Score:3, Informative)
by Rick17JJ ( 744063 ) on Friday September 28, 2007 @03:39PM (#20786771)
If anyone is interested, here are several links to downloadable ebooks and manuals for using slide rules:
My only experience with using a slide rule was back in the 1960s in an 8th grade math class where we spent two weeks learning to use slide rules. We were just 8th graders, but were able to use a few basic features of something that was normally used mostly by scientists and engineers. Mr. Turner, our math instructor, even wore a small slide rule as a tie clasp. I suspect that the use of slide rules was something that probably was not normally taught to 8th graders.
Later on in Junior College, I once thought about possibly taking a 1 credit slide rule class, but didn't. That was in the days back before pocket calculators. In the College Algebra class our textbook had Log tables, a square root table and various other tables in the appendixes in the back which we used to get answers without a pocket calculator (or a slide rule).
I still have my dad's old ivory and wood slide rule that he bought back in the 1950s, and also a more modern plastic slide rule which I purchased later. I plan to briefly brush up on how to use them just for the heck of it.
• Re:Of course (Score:4, Informative)
by dbc ( 135354 ) on Friday September 28, 2007 @04:34PM (#20787577)
At the university I attended, freshman engineering in 1973-74 required the use of a slide rule. In the 74-75 year, you could take freshman engineering with either a "slip-stick" or a calculator. My freshman year, fall of '75, required a pocket calculator. I was facile with a slide rule from high school chemistry and physics, and can still do the basics, but haven't used one since. So the transition from slide rule to calculator was very fast.
A slide rule enforces estimating a reasonable answer beforehand, and encourages arranging computations for economy of calculation. I think there is a big benefit to critical thinking skills in practicing basic computation with a slide rule.
That said, computers have made it possible to do what was formerly impossible due to computational expense. Integrated circuits would not be where they are if you couldn't burn many flops running SPICE. Cars would weigh more and get worse gas mileage without mechanical simulations, because they would have to be over-built in order to simplify strength calculations. Pre-computer-simulation camera optics suck compared to modern computer-optimized lenses; ditto for antennas.
I once met a guy whose mother was a computer... that was her job title: "computer". She worked for a university research department, where row upon row of "computers", mostly women, sat in front of mechanical calculators all day long, 40 hours per week, cranking through tablets of computations for various numerical models. Modern electronic computers enable solutions to problems that were too expensive to attack before, and life *is* better as a result.
• by Chris Tucker ( 302549 ) on Friday September 28, 2007 @05:18PM (#20788153) Homepage
Ebay. Search for "Pickett slide rule"
Grab a Microline 120 or 140 for about US$10.00.
Yes, it's plastic, but it's a damn fine slipstick for a beginner, and there's several "How to use a Slide Rule" books on the Gutenberg site.
An Examination of Childhood Grief
Grief is a natural part of life and an experience shared by all human beings at one point or another. Children's experience of grief is, as for all humans, unique and subjective. However, societal myths continue to perpetuate a cycle of misunderstandings surrounding children and their experiences of loss. Because grief is a complex subject, it is necessary to explore its history through its definition and the associated grief theories. It is also necessary to understand the various kinds of grief and to explore different associated examples. Children are influenced by different kinds of grief, and a child's developmental level also influences the way he or she copes with grief. Finally, Adlerian theory connects to grief work through the components of life tasks, lifestyle, social interest, and subjectivity.
Katey Lindell
Lindell MP 2013.pdf (313.09 KB)
Translation of "general" - English-Mandarin Chinese dictionary
adjective uk /ˈdʒen.ər.əl/ us /ˈdʒen.ər.əl/
The general feeling is that justice was not served. 人们普遍感觉正义未能得到伸张。
There is general concern about rising crime rates. 人们普遍对犯罪率上升感到担忧。
My general impression of the place was good. 我对这个地方总的印象很好。
The talk is intended to be of general interest (= of interest to most people). 这次商谈是要探讨大家普遍感兴趣的话题。
UK formal Rain will become more general in the southeast during the afternoon. 降雨范围将在下午进一步扩展到东南地区。
in general
B1 also as a general rule usually, or in most situations
In general, men are taller than women. 一般来说,男人个子比女人高。
As a general rule, we don't allow children in the bar. 一般情况下,我们不允许儿童进入酒吧。
B2 considering the whole of someone or something, and not just a particular part of him, her, or it
So, apart from the bad ankle, how are you in general? 那么,除了踝部的伤病,你身体总的来说怎么样?
be in the general interest formal
to be a good thing for the public
The government will only say it is not in the general interest to reveal any more information. 政府只能说,透露更多的信息有损公众的利益。
B1 not detailed, but including the most basic or necessary information
What he said was very general. 他说得很笼统。
The school aims to give children a general background in a variety of subjects. 学校旨在向孩子们概要介绍各种学科的背景知识。
I'm not an expert, so I can only speak in general terms on this matter. 我不是专家,我只能泛泛地谈这个问题。
the general
things considered as a unit and without giving attention to details
His book moves from the general to the particular. 他的书先介绍总体情况然后探讨具体问题。
B2 including a lot of things or subjects and not limited to only one or a few
general knowledge 常识
used as part of the title of a job of someone who is in charge of a whole organization or company
the general manager 总经理
the General Secretary of the UN 联合国秘书长
noun [ C ] uk /ˈdʒen.ər.əl/ us /ˈdʒen.ər.əl/
also General an officer of very high rank, especially in the army
He was promoted to the rank of general.
General Brown/Roger Brown
[ as form of address ] Thank you, General.
(Translation of “general” from the Cambridge English-Chinese (Simplified) Dictionary © Cambridge University Press)
Foundation for Mind-Being Research Editorial
A new deck of 52 cards usually has two jokers. Likewise there are two jokers that bedevil physics -- zero and infinity. They represent powerful adversaries at either end of the realm of numbers that we use in modern science. Yet, zero and infinity are two sides of the same coin -- equal and opposite, yin and yang. "Multiply zero by anything and you get zero. Multiply infinity by anything and you get infinity. Dividing a number by zero yields infinity; dividing a number by infinity yields zero. Adding zero to a number leaves the number unchanged. Adding a number to infinity leaves infinity unchanged." Yet, the biggest questions in science, philosophy, and religion are about nothingness and eternity, the void and the infinite, zero and infinity.
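As an aside (not from the editorial itself), the arithmetic conventions listed above are exactly the ones IEEE 754 floating point adopts for its special values; a small, purely illustrative Python sketch:

```python
# IEEE 754 floating point encodes the conventions described above:
# infinity absorbs addition and multiplication, and division by
# infinity yields zero.
inf = float("inf")

print(inf * 5)      # inf -- multiply infinity by anything: infinity
print(inf + 1e308)  # inf -- adding a number to infinity leaves infinity
print(1.0 / inf)    # 0.0 -- dividing a number by infinity yields zero
print(0.0 * 5)      # 0.0 -- multiply zero by anything: zero

# Division by zero is where ordinary arithmetic breaks down. Python's
# float division raises rather than returning infinity, mirroring the
# essay's point that dividing by zero destroys the fabric of the rules.
try:
    1.0 / 0.0
except ZeroDivisionError as e:
    print("undefined:", e)
```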
Zero is behind all of the big puzzles in physics. In thermodynamics a zero became an uncrossable barrier: the coldest temperature possible. In Einstein's theory of general relativity, a zero became a black hole, a monstrous star that swallows entire suns and can lead us into new worlds. The infinite density of the black hole represents a division by zero. The big bang creation from the void is a division by zero. In quantum mechanics, the infinite energy of the vacuum is a division by zero and is responsible for a bizarre source of energy -- a phantom force exerted by nothing at all. Yet dividing by zero destroys the fabric of mathematics and the framework of logic -- and threatens to undermine the very basis of science.
The biggest challenge to today's physicists is how to reconcile general relativity and quantum mechanics. However, these two pillars of modern science were bound to be incompatible. "The universe of general relativity is a smooth rubber sheet. It is continuous and flowing, never sharp, never pointy. Quantum mechanics, on the other hand, describes a jerky and discontinuous universe. What the two theories have in common -- and what they clash over -- is zero." "The infinite zero of a black hole -- mass crammed into zero space, curving space infinitely -- punches a hole in the smooth rubber sheet. The equations of general relativity cannot deal with the sharpness of zero. In a black hole, space and time are meaningless."
"Quantum mechanics has a similar problem, a problem related to the zero-point energy. The laws of quantum mechanics treat particles such as the electron as points; that is, they take up no space at all. The electron is a zero-dimensional object, and its very zerolike nature ensures that scientists don't even know the electron's mass or charge." But, how could physicists not know something that has been measured? The answer lies with zero. According to the rules of quantum mechanics, the zero-dimensional electron has infinite mass and infinite charge. As with the zero-point energy of the quantum vacuum, "scientists learned to ignore the infinite mass and charge of the electron. They do this by not going all the way to zero distance from the electron when they calculate the electron's true mass and charge; they stop short of zero at an arbitrary distance. Once a scientist chooses a suitably close distance, all the calculations using the "true" mass and charge agree with one another." This is known as renormalization -- the physicist Dr. Richard Feynman called it "a dippy process."
The leading approach to unifying quantum theory and general relativity is string theory. In string theory each elemental particle is composed of a single string and all strings are identical. The "stuff" of all matter and all forces is the same. Differences between the particles arise because their respective strings undergo different resonant vibrational patterns -- giving them unique fingerprints. Hence, what appear to be different elementary particles are actually different notes on a fundamental string. In string theory zero has been banished from the universe; there is no such thing as zero distance or zero time. Hence, all the infinity problems of quantum mechanics are solved.
But there is a price we must pay to banish zero and infinity. The size of a typical string in string theory is the Planck length, i.e., about 10⁻³³ centimeters. This is over a thousand trillion times smaller than what the most advanced particle-detection equipment can observe. Are these unifying theories, which describe the centers of black holes and explain the singularity of the big bang, becoming so far removed from experiment that we will never be able to determine their correctness? The models of the universe that string theorists and cosmologists develop might be mathematically precise, beautiful, and consistent and might appear to explain the nature of the universe -- and yet be utterly wrong. Scientific models and theories, philosophies, and religions will continue to exist and be refined. However, because of zero and infinity, we can never have "proof". All that science can know is that the cosmos was spawned from nothing, and will return to the nothing from whence it came.
References for quotes and further background: Charles Seife, Zero: The Biography of a Dangerous Idea, New York: Viking, 2000, pp. 191–209; Brian Greene, The Elegant Universe: Superstrings, Hidden Dimensions, and the Quest for the Ultimate Theory, New York: Vintage Books, 1999.
William C. Gough , Mar 2002
Updated Apr 10, 2002.
You know how car windows don't shatter like normal glass windows? When they get smashed, they sag like fabric. What kind of sorcery is this? It's safety glass, and it's AWESOME!
Safety glass is made by laminating two pieces of glass together with a sheet of plastic in the middle. You heat up this little sandwich and press everything together, and the plastic melts and sticks to the glass. Once everything cools down, it looks just like a normal piece of glass, but if you smash it, the plastic holds everything together. The glass is now a composite, and it's notably stiffer than a single sheet.
As it turns out, it's really easy to make this yourself using just a bit of plastic, binder clips and a toaster oven. And once you can laminate glass, life just gets better and better.
Ready? Go!
Step 1: What you need
I'm just experimenting with this stuff, so I'm using itty bitty pieces of material. The same process works for larger pieces of glass, though.
I used:
glass microscope slides -- $.05 each from your local science depot. Tonight, I learned that, in Manila, you can buy microscope slides at the drug store. Isn't that awesome?
EVA film -- This is a thermoplastic film that will hold your glass together. You can get it pretty cheap on eBay. Get yourself a nice big roll of the stuff, because you're gonna want to play with this a lot.
Binder clips -- available wherever paper is looseleaf
A cheapo toaster. When playing with chemicals, I like to use dedicated equipment so I don't accidentally eat my experiments. Heated plastics play with our bodies in all kinds of ways that we don't understand. For $10, I can get an extra toaster and avoid being an inadvertent guinea pig. Plus--science toaster!
This is a brilliant project. Thanks for the post. Will this work with large pieces, i.e. window glass? I mean, is the plastic you use here the same as for larger projects that are required to meet code? If so I am gonna go and laminate all my downstairs house window panes.
Did the flowers work the same way?
In Australia, "safety glass" means the stuff that shatters into thousands of little bits, so that there are no large shards that could penetrate the body.

Your example is called "laminated" and is usually only available for windshields. When my car's back window shattered, I made sure the replacement had tint film right to the edges before being fitted back into the rubber seal. If it shatters again, it will be held in place by the tint film.

Ironwolfcanada mentions that "safety glasses" are not made from glass. They are usually made from polycarbonate, which means they don't like exposure to oils.

Have you experimented with various colors introduced into the plastic layer?
Mate, I make safety glass for a living in Australia, and the majority of safety glass in windows is laminated with PVB, not just car windshields. The glass that shatters into tiny pieces is furnaced glass, aka toughened. We laminate that too. There are regulations on what type of glass can be glazed into a building: high-rise buildings should be 13.52 mm laminated toughened, and normal houses regulate that windows have to be 4 mm float or thicker.
Safety glass is made from glass. The plastic that is sandwiched in between the two panes of glass is polycarbonate plastic.
From my Wikipedia search, both of the types of glass you mention are solutions to the same problem of keeping glass from becoming millions of tiny knives when it breaks. The glass that shatters into thousands of tiny bits is tempered glass, and it's the strongest glass available (according to Wikipedia).

The laminated stuff I make in this Instructable is actually pretty common in architecture. If you run your finger along the edges of cheap, fancy glass walls, you'll feel something rubbery -- that's the edge of the EVA or PVB film in between two laminated pieces of glass.

I saw somewhere that you can also laminate polycarb to glass, which sounds pretty frickin cool. I really like the idea of laminating multiple colors together -- I'll have to go dig up some polycarb and give it a try.
I think the stuff that shatters into little bits is tempered. I think that kind is best for cars, because once broken, it has no leverage to slice someone open. The laminated stuff seems like a good option for tall buildings so people can't fall out of a broken window.

Prank, have you experimented with multiple layers of glass to stop projectiles?
I like this idea very much, but when I searched eBay, I couldn't find the thermal plastic product under the name you supplied. Can you give more information please?
Many thanks for that, Prank. The concept and descriptions you have given can help anyone who wants to do this project for making their own laminate.
Noel
Hmmm. Did you check the link from the materials page? It goes to an ebay vendor that's selling EVA film, and this ebay search for EVA film turned up a bunch of results: http://www.ebay.com/sch/i.html?_trksid=p5197.m570.l1313&_nkw=EVA+film&_sacat=0
Here is a link to EVA plastic films:
http://www.blueridgefilms.com/eva_film.html

And here is another link to all kinds of links to EVA films:
http://isearch.avg.com/search?cid={3F5688F2-A5DC-4CFB-8282-3678D765DDA5}&mid=6826f94916fccf73643815153ead7b87-9557abddcd2a82ce3aec9d57c4d569e029d61ac8&ds=AVG&lang=en&v=;pr=pr&d=2012-03-29%2016:55:24&sap=dsp&q=eva+film+properties

Hope this will end the controversy here. ;>}}
This would be great for picture frames!
also have you noticed how well insulated the car is with windows all over it? i think larger glass panes might be a cheap alternative to the sweet double pane windows. plus when i toss a bat through them again it won't make such a mess. <br> <br>yes when i was in high school i accidentally threw a bat through my grandmother's back window. double pane and 9 billion shards of glass later the bill was close to 600 bucks. and that was in the 90s.
Not sure where you live but, respectfully, you seem never to have spent much time in a car or in a home with single pane glazing in cold weather. Glass is a great conductor of heat, not an insulator.
I have worked with laminated glass, and can cut it to size and finish it. <br> <br>This is a great kids science experiment. Interesting, but not suitable for real glazing. <br> <br>Anyone wanting "laminated glass" cannot go wrong just going to a glazier. It is relatively cheap to have glass cut to size, choosing the strength and thickness of quality laminated glass, for automotive or domestic use. <br> <br>The glass 'experiment' is most definitely not 'safety glass' as that is toughened glass. Toughened glass is either heat strengthened or fully tempered glass. <br>http://www.youtube.com/watch?v=uA7RTxX2KL0 <br> <br>Laminated manufacturing of glass is done with annealed glass. <br>http://youtu.be/VTKLjtcbnn4 <br> <br>For some laminated windscreens fitted in certain European model vehicles the glass is heat strengthened and zone toughened laminated glass. Both glass processes are used for occupant safety. <br>
Interesting idea - thank you. <br>The EVA film is expensive! I wonder if you could use 'low E' film instead. <br>
Neat idea, and has numerous applications, <br>BUT, not for "safety glasses"!!! <br>When safety glass breaks, tiny shards of glass DO separate from the plastic <br>core, and if you were wearing "safety glasses" made with glass, <br>it's highly probable that these shards of glass would get in your eyes. <br>This is why 'safety glasses' are NOT made from glass! <br>Keep up the neat ideas!
excellent instructable......
Very good idea! <br> <br>I like the "captions" on the video, because I can read English but I don't understand it when spoken. Thanks for that.
|
Chancellor family
05/05/2013 - 4:15pm
Now that the 150th anniversary of the Battle of Chancellorsville is upon us, it seems a fitting time to look at how the lives of a family of mainly young women were affected by being suddenly thrust into a war zone and how they were able to survive with the aid of an enemy officer. Sue Chancellor was only fourteen when the area around her home became a bloody battlefield. Their house, called Chancellorsville, was used for a headquarters by first the Confederate and then the Union army while the family continued to live there.
|
Famous Scottish baronial style buildings
List of famous buildings in the Scottish baronial style movement, listed alphabetically with photos when available. This list of Scottish baronial style buildings, structures and monuments includes information like what city the structure is in, and when it was first opened to the public. There are a lot of historic Scottish baronial style structures around the world, so why not save some money and check them out here without having to pay for travel? These popular Scottish baronial style buildings attract visitors from all over the world, so if you're ever near them you should definitely pay them a visit. A factual list, featuring items like Balmoral Castle and Stirling Castle.
This list is a great source for answering the questions, "What are the most famous Scottish baronial style buildings?" and "What do Scottish baronial style buildings look like?"
List Photo: Freebase/CC-BY-SA-2.0
1. Balmoral Castle
Balmoral Castle is a large estate house in Royal Deeside, Aberdeenshire, Scotland. It is located near the village of Crathie, 6.2 miles west of Ballater and 6.8 miles east of Braemar. Balmoral has been one of the residences for members of the British Royal Family since 1852, when the estate and its ...more
2. Banff Springs Hotel
The Banff Springs Hotel is a luxury hotel that was built during the 19th century as one of Canada's grand railway hotels, being constructed in Scottish Baronial style and located in Banff National Park, Alberta, Canada. The hotel was opened to the public on June 1, 1888. Presently, The Fairmont Banff ...more
3. Belfast Castle
Belfast Castle is set on the slopes of Cavehill Country Park, Belfast, Northern Ireland in a prominent position 400 feet above sea level. Its location provides unobstructed views of the city of Belfast and Belfast Lough. more
4.
Canadian Museum of Nature
The Canadian Museum of Nature is a natural history museum in Ottawa, Ontario, Canada. Its collections, which were started by the Geological Survey of Canada in 1856, include all aspects of the intersection of human society and nature, from gardening to gene-splicing. The Museum is affiliated with the ...more
5. Casa Loma
Casa Loma is a Gothic Revival style house and gardens in midtown Toronto, Ontario, Canada, that is now a museum and landmark. It was originally a residence for financier Sir Henry Mill Pellatt. Casa Loma was constructed over a three-year period from 1911–1914. The architect of the mansion was E. J.more
6.
Chateau Qu'Appelle
Photo: Freebase/Public domain
The Chateau Qu'Appelle was a Grand Trunk Pacific Railway hotel planned for Regina, Saskatchewan. Construction was started in 1913 at the corner of Albert Street and 16th Avenue. Rising costs, labour and material shortages, and the bankruptcy of the railway stopped the project before it was completed.more
7. Dunrobin Castle
Dunrobin Castle is a stately home in Sutherland, in the Highland area of Scotland, and the family seat of the Earl of Sutherland and the Clan Sutherland. It is located 1 mile north of Golspie, and approximately 5 miles south of Brora, overlooking the Dornoch Firth. Dunrobin's origins lie in the Middle ...more
8. Fair Lane
Fair Lane was the name of the estate of Ford Motor Company founder Henry Ford and his wife Clara Ford in Dearborn, Michigan, in the United States. It was named after an area in County Cork in Ireland where Ford's adoptive grandfather, Patrick Ahern, was born. The 1,300-acre estate along the River Rouge ...more
|
UC San Diego Health
Kaposi Sarcoma
Kaposi sarcoma (KS) is a rare type of cancer caused by a strain of the human herpes virus, known as the human herpes virus 8. The virus selectively attacks endothelial cells lining the walls of the blood vessels, causing tumors and bleeding into surrounding tissue.
A person with KS may have red or purple lesions on the skin that do not fade with time. The virus may also affect membranes lining the mouth, nose, gastrointestinal tract or lungs. In rare cases, the disease may spread to the liver or brain.
Most people who harbor the virus will never get the cancer. The primary risk factor for developing KS is having a compromised immune system, either because of co-infection with HIV or because of complications following an organ transplant. Older men of Mediterranean or Ashkenazi Jewish descent (who do not have any apparent loss of healthy immune function) may have a genetic vulnerability to the virus as well.
A diagnosis of KS is based on examining tumor tissue samples under the microscope and documenting the presence of the virus.
At UC San Diego Health, specialists from Owen Clinic and Moores Cancer Center collaborate to deliver the best treatment for individuals with Kaposi sarcoma.
All types of KS are treated based on the extent of the cancer.
If there are only a couple of lesions on the skin, the first course of action is to review any medications that you are taking that might be suppressing the immune system. Your doctor may then suggest stopping the immunosuppressing drug, lowering its dose or switching to an alternate drug.
Glucocorticoids (such as cortisol, cortisone and prednisone), for example, should not be taken by individuals with KS.
Patients with more extensive KS, affecting larger areas of the skin or other organs, may be treated with infusion therapy (chemotherapy). Infusion therapy for KS is usually a single-agent drug administered at a lower dose than for more typical cancers. Because of this, the side effects of infusion therapy are also diminished.
Because the treatment for KS does not kill the virus that causes the disease, complete remission from KS is rare. In this way, KS shares many similarities with warts.
Control of KS, however, is the norm, and for most people can be expected.
In addition, as part of the National Cancer Institute's AIDS Malignancy Consortium, we frequently have clinical trials of new therapies for those who are not responding to standard options.
|
5 Written questions
5 Matching questions
1. smooth er
2. chromatin
3. nuclear lamina
4. ribosomes
5. scanning electron microscope
1. a a complex of proteins and DNA
2. b an electron beam scans the surface of the sample. The beam excites electrons on the surface and these secondary electrons are detected. The result is the image
3. c a netlike array of protein filaments that maintains the shape of the nucleus by mechanically supporting the nuclear envelope
4. d complexes made of ribosomal RNA and protein
5. e surface lacks ribosomes
5 Multiple choice questions
1. takes cells apart and separates the major organelles and other sub cellular structures from one another.
2. encloses the nucleus, separating its contents from the cytoplasm.
3. structures that carry the genetic information
4. region that is not membrane enclosed
5. carries out a variety of tasks in the cell such as synthesis of proteins and their transport into membranes and organelles or out of the cell.
5 True/False questions
1. vesicles: a semi fluid, jellylike substance
2. transport vesicles: sacs made of membrane
3. cytosol: a semi fluid, jellylike substance
4. phagocytosis: a complex of proteins and DNA
5. nucleolus: region that is not membrane enclosed |
From Wikipedia, the free encyclopedia
A red Mini Cooper
The Mini is a small car made by the British Motor Corporation (BMC), British Leyland and Rover from 1959 to 2000. It used a transverse engine and front-wheel drive, where the turning power was put on the front wheels of the car rather than the back wheels. Its design saved a large amount of space. It allowed most of the car's size to be used for passengers and luggage. It had only two doors, but could seat up to four passengers.
The design was very influential for car-making in the second half of the 20th century.[1] In 1999, the Mini was voted the second most influential car of the 20th century, behind the Ford Model T.[2][3] The original model is considered an icon of the 1960s in Britain.[4][5][6]
The original Mini was designed for BMC by Alec Issigonis.[7][8]
It was first released in August 1959. Rover ceased production in October 2000. It was marketed under the names Austin, Morris, Cooper, Wolseley, Riley, British Leyland and Rover.
References[change | change source]
1. Buckley, Martin; Rees, Chris (2006). Cars: An encyclopedia of the world's most fabulous automobiles. Hermes House. ISBN 1-84309-266-2. "The BMC Mini, launched in 1959, is Britain's most influential car ever. It defined a new genre. Other cars used front-wheel drive and transverse engines before but none in such a small space."
2. "This Just In: Model T Gets Award", James G. Cobb, The New York Times, 24 December 1999
3. Strickland, Jonathan. "How the MINI Cooper Works". Retrieved 20 July 2010.
4. Reed, Chris (2003). Complete Classic Mini 1959–2000. Orpington: Motor Racing. ISBN 1-899870-60-1.
5. Reed, Chris (1994). Complete Mini: 35 Years Of Production History, Model Changes, Performance Data. Croydon: MRP. ISBN 0-947981-88-8.
6. Clausager, Anders (1997). Essential Mini Cooper. Bideford, Devon: Bay View Books. ISBN 1-870979-86-9.
7. Wood, Jonathan (2005). Alec Issigonis: The Man Who Made the Mini. Breedon Books Publishing. ISBN 1-85983-449-3.
8. Nahum, Andrew (2004). Issigonis and the Mini. Icon Books. ISBN 1-84046-640-5. |
E-mail: editor@airwar1.org.uk
Aircraft development
A Pilot's war
WW1 aircraft exhibits
The Royal Flying Corps 1914-18
This site provides an introduction to the history of the Royal Flying Corps and its aircraft during the First World War, together with links to other related sites and suggestions for further reading. Subsidiary sites look in more detail at four squadron histories and the experiences of a number of RFC officers; links to these are under the "A Pilot's War" tab above.
Recent additions
An American engined Vickers Gunbus in 1915 - the J W Smith Static Motor
AM2 Charles Carter's photos of fellow drivers in the RFC and RAF 1918 (possibly MT Base Depot, Rouen, France)
Updates on the RFC/RAF Personnel List of those mentioned on this site.
RAF's 95th Anniversary ...
The Royal Air Force celebrated its 95th anniversary on 1st April 2013. See link for the Air Minister's remarks in 1918 regarding "our Flying Men" together with contemporary comment and background.
A brief history of the RFC
At the commencement of the First World War Britain had some 113 aircraft in military service, the French Aviation Service 160 and the German Air Service 246. By the end of the war each side was deploying thousands of aircraft.
The RFC was formed in April 1912 as the military (army and navy) began to recognise the potential for aircraft as observation platforms. It was in this role that the RFC went to war in 1914 to undertake reconnaissance and artillery observation. As well as aircraft the RFC had a balloon section which deployed along the eventual front lines to provide static observation of the enemy defences. Shortly before the war a separate Naval Air Service (RNAS) was established splitting off from the RFC, though they retained a combined central flying school.
The RFC had experimented before the war with the arming of aircraft but the means of doing so remained awkward - because of the need to avoid the propeller arc and other obstructions such as wings and struts. In the early part of the war the risk of injury to aircrew was therefore largely through accidents. As air armament developed the dangers to aircrew increased markedly and by the end of the war the loss rate was 1 in 4 killed, a similar proportion to the infantry losses in the trenches.
For much of the war RFC pilots faced an enemy with superior aircraft, particularly in terms of speed and operating ceiling, and a better flying training system. The weather was also a significant factor on the Western Front with the prevailing westerly wind favouring the Germans. These disadvantages were made up for by determined and aggressive flying, albeit at the price of heavy losses, and the deployment of a larger proportion of high-performance aircraft. The statistics bear witness to this with the ratio of British losses to German at around 4 to 1.
When the RFC deployed to France in 1914 it sent four Squadrons (No.s 2,3,4 and 5) with 12 aircraft each, which together with aircraft in depots, gave a total strength of 63 aircraft supported by 900 men. By September 1915 and the Battle of Loos, the RFC strength had increased to 12 Squadrons and 161 aircraft. By the time of the first major air actions at the first Battle of the Somme, July 1916, there were 27 Squadrons with 421 aircraft plus a further 216 in depots. The RFC expansion continued rapidly thereafter putting considerable strain on the recruiting and training system as well as on the aircraft supply system.
At home, the RFC Home Establishment was responsible for training air and ground crews and preparing squadrons to deploy to France. Towards the end of the war the RFC provided squadrons for home defence, defending against German Zeppelin raids and later Gotha bomber raids. The RFC and the Royal Naval Air Service (RNAS) had limited success against the German raids largely through problems of locating the attackers and reaching the operating altitude of the Zeppelins.
The RFC was also deployed to the Middle East, the Balkans and later to Italy. Initially the Middle East detachments had to make do with older equipment but were eventually given more modern machines. The RFC (in relatively small numbers) was able to give valuable assistance to the Army in the eventual destruction of Turkish forces in Palestine, Trans Jordan and Mesopotamia (now Iraq).
In the final days of the RFC, over 1200 aircraft were deployed in France and were available to meet the German offensive of 21 March 1918 with the support of RNAS squadrons. From 1 April these forces combined to form the Royal Air Force as an independent armed service. From small beginnings the air services had grown by the end of the war to an organisation of 290,000 men, 99 Squadrons in France (with 1800 aircraft), a further 34 squadrons overseas, 55 Home Establishment squadrons and 199 training squadrons, with a total inventory of some 22,000 aircraft.
Major General Hugh Trenchard as Commander of the RFC in France for much of the war was the driving force behind the expansion of the air service, supported by the Director General of Military Aviation Major General Sir David Henderson. General Trenchard was strongly committed to supporting the ground forces and sharing their burden of attrition. He convinced the Army Commander-in-Chief, General Haig, of the contribution of the air service and won his support for the expansion of the RFC in France (against the competing pressures for home defence and a long range bombing force, which ironically, Trenchard was later to command).
A Pilot's war
A Pilot's war gives a more detailed insight into life in the RFC from the perspective of a number of officers.
Return to top
In memory of Sgt Matthew Marmion, 4th Battn. Royal Fusiliers, killed in action with the BEF on 24 August 1914 at Mons, an early casualty of the Great War.
Copyright © 2004 www.airwar1.org.uk
Site map Personnel List |
Inhalational anesthetic induction
Ph: Amin Elkholy
Inhalational induction is common in pediatric patients, and the potent inhaled anesthetic agents are excellent bronchodilators. The only exception is desflurane, which has mild bronchoconstricting activity.
The potent inhaled agents decrease skeletal muscle tone in a dose-dependent fashion, which often improves surgical exposure.
Mode Of Action
Act on GABA-A, glycine, and glutamate receptors.
How to answer Drug Information Request?
Here are the seven steps you can use in a systematic approach for answering a Drug Information Request; if you use these steps you will provide the most evidence-based answer.
Step 1: Secure Demographics of Requestor.
· Name and profession (physician, pharmacist, nurse, patient).
· DI Form & DI Answer Form.
· First question asked by requestor.
· Requestor demographics.
· Appropriate formulation and delivery method.
· Initial question.
Step 2: Obtain Background Information.
· Patient specific or academic.
· Age, height, weight, history, current problem, medication, allergies, laboratory information.
· Question type.
· Patient background information.
Step 3: Determine and Categorize the Ultimate Question.
· Select the suitable question category from the question categories.
· Locate the references that contain the question category (drug oriented or disease oriented).
· Convert the initial question plus the background information into the ultimate question.
· Categorize the ultimate question.
· Develop a timeline for the response.
Step 4: Develop Strategy and Conduct Search.
· Exact resource name.
· Further search in secondary and primary resources.
· Develop a search strategy and select the resources.
· Conduct a systematic search.
Step 5: Perform Evaluation, Analysis, and Synthesis.
· Confirm with at least two resources.
· Perform evaluation and double-check the final answer.
· Clinical opinion is important according to the case.
· Confirm with other references.
· Check the final answer with professional judgment.
· Check all medication doses and interactions before answering the question.
Step 6: Formulate and Provide Response.
· Ultimate question.
· The answer.
· Recommendation.
· References.
· Pharmacist.
· Drug information answer form.
Step 7: Conduct Follow-Up and Documentation.
· Ask the requestor if the answer was helpful and what action was taken.
· Store a copy of every DI request in a file and on the PC.
· Follow up on your answer.
· Save all data.
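The seven steps above map naturally onto a record that gets filled in as a request progresses. Here is a minimal sketch as a Python dataclass; the field names are illustrative only, not a standard DI form:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DIRequest:
    """One drug-information request, tracked through the seven steps."""
    requestor_name: str                                      # Step 1
    profession: str                                          # physician, pharmacist, nurse, patient
    initial_question: str                                    # first question asked
    background: List[str] = field(default_factory=list)      # Step 2
    ultimate_question: str = ""                              # Step 3
    category: str = ""                                       # drug oriented / disease oriented
    resources_used: List[str] = field(default_factory=list)  # Step 4
    answer: str = ""                                         # Steps 5 and 6
    references: List[str] = field(default_factory=list)
    follow_up_notes: str = ""                                # Step 7

# Example: reshaping a hypothetical initial question into the ultimate question
req = DIRequest("Dr. X", "physician", "Can drug A be crushed?")
req.background.append("Patient has a feeding tube")
req.ultimate_question = ("Is drug A safe and effective when crushed "
                         "and given via a feeding tube?")
```

Keeping every request in a structure like this also covers the Step 7 requirement of storing a copy of each request for follow-up.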
Serious and life-threatening cases from use of Harvoni and Sovaldi with amiodarone
The specific changes to each label are summarized below.
Guidelines for anticoagulant use in mechanical valve pregnant woman
Management of mechanical heart valves in pregnancy is still a clinical dilemma for women and clinicians, because pregnancy itself is a prothrombotic state and the options available for management of this case remain subject to a benefit-risk choice.
FDA has approved Bellafill for the treatment of acne scars
The U.S. Food and Drug Administration (FDA) has approved the dermal filler Bellafill, a polymethylmethacrylate (PMMA) collagen filler, for the treatment of acne scars. Bellafill represents a significant clinical advance because it is the only filler on the market approved for this disfiguring condition. Acne is the most common skin condition in the U.S., affecting 40-50 million people, and up to 95% of people with acne may go on to suffer from scarring.
Bellafill was studied extensively before its FDA approval and was established to be safe and effective for the correction of moderate to severe, atrophic, distensible facial acne scars on the cheek in patients over the age of 21 years.
FDA approved Lynparza for treatment of Advanced Ovarian Cancer
The U.S. Food and Drug Administration today granted accelerated approval to Lynparza (olaparib), a new drug treatment for women with advanced ovarian cancer associated with defective BRCA genes, as detected by an FDA-approved test.
Ovarian cancer forms in the ovary, one of a pair of female reproductive glands where ova, or eggs, are formed. The National Cancer Institute estimates that 21,980 American women will be diagnosed with, and 14,270 will die from, ovarian cancer in 2014.
Top FIFA players initiate "11 Against Ebola" campaign
This is the first emergency health campaign of its kind implemented by FIFA and CAF, and it is inspired by the dedication of medical personnel at the front line of the fight against Ebola.
The campaign is meant to deliver a reassuring, positive message to affected communities with simple and clear information that will help to combat the spread of the Ebola virus.
Through the power and popularity of football, the 11 Against Ebola campaign seeks to reach as wide an audience as possible in the most affected regions and globally, by joining forces with top international football players, the World Bank, FIFA and CAF member associations, local organizations and the media.
The "11 Against Ebola" campaign features World Player of the Year, Real Madrid's Cristiano Ronaldo, Barcelona's Neymar Jr, Chelsea's Didier Drogba and Bayern Munich's Philipp Lahm. Other players involved at the start of the campaign are Gareth Bale (Real Madrid/Wales), Raphaël Varane (Real Madrid/France), Neymar Jr. (Barcelona/Brazil), Gerard Piqué (Barcelona/Spain), Xavi (Barcelona/Spain), Jérôme Boateng (Bayern Munich/Germany), John Obi Mikel (Chelsea/Nigeria), George Davies (Sierra Leone) and Bayern Munich coach Pep Guardiola.
Ebola Virus Complete Fact Sheet
Ebola virus disease (EVD), formerly known as Ebola haemorrhagic fever, is a severe, often fatal human disease. The Ebola virus causes an acute, serious illness, which is commonly fatal if untreated. Ebola virus disease (EVD) first appeared in 1976 in two simultaneous outbreaks, one in Nzara, Sudan, and the other in Yambuku, Democratic Republic of Congo. The latter occurred in a village near the Ebola River, from which the disease takes its name.
Pharmacology Refresh Cards: Noradrenaline - Norepinephrine
Noradrenaline - norepinephrine
Drug name
Sympathomimetic (Vasopressor)
Stimulates alpha-receptors in arterial and venous beds and beta1 receptors of heart, resulting in peripheral vasoconstriction and stimulation of heart rate and contractility. Coronary vasodilation occurs secondary to enhanced myocardial contractility.
Mechanism of action
Cont. infusion
0.01–0.4 µg/kg/min IV infusion via a central vein
Septic shock ,with low SVR
Severe hypertension may occur if noradrenaline is given to patients taking tricyclic antidepressants since tricyclics block the uptake of noradrenaline into nerve endings.
Drug interaction
Myocardial ischaemia
peripheral ischaemia
Side effects
Hypovolaemic shock
Acute myocardial ischaemia or MI
Contraindication & precaution
Pregnancy: Category D
Special populations
Norepinephrine may lose potency in normal saline solution
Administer in D5W or 5% dextrose in saline
If extravasation occurs, infiltrate the area with phentolamine
Administration notes
4mg/4ml amp
Availability in pharmacy
Other notes
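The continuous-infusion dose above is weight-based (µg/kg/min), so it has to be converted to a pump rate for whatever dilution is actually prepared. A minimal sketch of that arithmetic, assuming a hypothetical dilution of one 4 mg/4 mL ampoule in 250 mL of D5W; the real dilution must follow local protocol:

```python
def norepinephrine_rate_ml_per_h(dose_mcg_kg_min, weight_kg,
                                 drug_mg=4.0, diluent_ml=250.0):
    """Convert a weight-based noradrenaline dose to a pump rate in mL/h."""
    conc_mcg_per_ml = drug_mg * 1000.0 / diluent_ml   # e.g. 4 mg/250 mL = 16 µg/mL
    dose_mcg_per_min = dose_mcg_kg_min * weight_kg    # µg per minute
    return dose_mcg_per_min * 60.0 / conc_mcg_per_ml  # mL per hour

# e.g. 0.05 µg/kg/min for a 70 kg patient at the assumed 16 µg/mL
rate = norepinephrine_rate_ml_per_h(0.05, 70)
```

The same function covers the whole dosing range above (0.01-0.4 µg/kg/min) by changing the first argument.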
Calcium Carbonate versus Calcium Acetate as Phosphate binder
Phosphate binder
Ph D: Neven Mohamed
For many years, calcium-containing phosphate binders (calcium acetate and calcium carbonate) were considered the best choice in the treatment of hyperphosphatemia: they are effective, possess only moderate side-effects and suppress parathormone levels with the intention of counteracting progression of secondary hyperparathyroidism. In two head-to-head observational studies (CARE, ARNOS), calcium-containing binders showed signals of clinical superiority versus the comparator sevelamer-HCl.[1,2]
Nevertheless, calcium may be absorbed significantly, which may result in a positive calcium balance and actively contribute to progressive cardiovascular and soft-tissue calcification in patients at risk.[3] |
Tuesday, November 10, 2015
Pandemic Legacy Disease Backstories
"Robo Fever" (Red)
Affluenza (Blue)
Gakarrhea (Yellow)
"Pluto Pox" (Black)
Monday, February 2, 2015
Pandemic: The Cure - standard deviation and probability
Initial setup
Recently we have been playing Pandemic: The Cure. The goal of the game is (loosely) to cure all the diseases. Each player has a certain number of dice that they roll on their turn (5 for most players, 7 if you're the Generalist) that give them their possible actions for the turn. One of the actions lets you use one of your dice to "bottle up" a disease die, and at the end of your turn you roll your bottled-up disease dice; if the total rolled on the dice for a particular color of disease is 13 or more, then you have cured that disease.
Bottling up the diseases is great because it removes that disease die from play and helps you discover the cure, but until you discover the cure that die of yours you used to "bottle" it up is locked up and you can't use it, meaning you'll have fewer possible actions on your turn, making you less effective until the cure is discovered.
Disease Dice
Each color of disease die has different face values from the other colors'. Each one has a "Cross" face (value 0) and 5 other values. The average value of the faces on each die is 3 but since the values are different the standard deviations of the values on the dice are different. The values on the faces of the dice are as follows.
[Table: face values for each colour of die. The standard deviations of the four colours' face values are 1.67, 2, 2.53 and 2.68.]
So in terms of trying to cure the diseases, the likelihood that the total of the values across all the dice you roll of a color will meet the required sum is different. Below is a table of probabilities of curing the disease with various numbers of a color of dice. The amount needed to cure a disease is normally 13, but sometimes can be 11.
[Table: for each number of dice and each colour, the probability of reaching a total of 11+ and of 13+.]
You'll see that for certain numbers of dice and goal numbers to reach, the probability of curing the disease can be quite different. For example, with 3 dice and a goal of 13 the probabilities range from 7.41% to 23.15%. Most differences are <10%, but that can be a fairly significant difference.
You'll see that no die is universally easier or harder to find cures with. Getting 13+ is only really possible once you have 4 dice. A goal of 11 isn't very likely until you have at least 3 dice, and even then the odds are very bad. It's very hard for a single character other than maybe the generalist to amass 4 or more dice by themselves. After you have 3 dice bottled up you only have two dice left. So getting the 1 in 6 result of being able to bottle up on your dice when you only have two dice is fairly unlikely. The game allows you to trade your bottled up dice to another player if you're on the same square. This probability table tells me that that's a very important part of the game.
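These cure probabilities can be computed exactly by enumerating every possible roll. The face values below are illustrative assumptions only: each die gets a Cross (0) face and an average face value of 3, as described above, but with made-up low- and high-variance spreads; substitute the actual per-colour values from the game.

```python
from itertools import product

def cure_probability(faces, n_dice, goal):
    """Exact P(sum of n_dice rolls >= goal) for a fair die with these faces."""
    outcomes = list(product(faces, repeat=n_dice))
    hits = sum(1 for roll in outcomes if sum(roll) >= goal)
    return hits / len(outcomes)

# Illustrative faces only: one Cross (0), average face value 3,
# with a low- and a high-variance spread (std devs ~1.41 and ~2.83).
low_var  = (0, 3, 3, 4, 4, 4)
high_var = (0, 0, 1, 4, 6, 7)

for n in (3, 4, 5):
    print(n, cure_probability(low_var, n, 13), cure_probability(high_var, n, 13))
```

With 3 dice this low-variance die literally cannot reach 13 (its maximum is 12), while the high-variance die can, which matches the observation that extreme results favour high variance when the average falls short of the goal.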
Advanced discussion:
In the above two tables, I ordered the dice colors by their standard deviations, lower on the left and higher on the right. One thing you might notice is that for 3 dice, the higher variance dice (aka higher standard deviation) have a higher probability of success. You'll notice that for higher numbers of dice, the colors with a lower standard deviation tend to have a higher chance of success.
When you have 3 dice, the average value of the sum is 9 (because the average for any given die is 3). Nine is insufficient for either goal so results near the average are bad. So you want a result that's far from the average, meaning you want a higher standard deviation. When you're at 5+ dice, the average result, 15, is above the goal so lower standard deviations are better.
When average is bad and you want that extreme result, you'll do better with a higher standard deviation. When the average is good and you don't need an extreme result, you'll do better with a lower standard deviation.
Player Dice, showing all faces
Epidemic Roll Change
Another place where probability plays a big role (roll?) is with epidemics. Each character die has one face which, when rolled, will advance the epidemic track. The generalist, with their seven dice instead of the normal 5, stands a much greater chance of rolling these values on their turn. To balance this, the generalist is allowed to ignore the effect of the first epidemic they roll each turn. This has a huge effect, and it gives the generalist an overall lower chance of advancing the epidemic track than other characters. Below is a table of probabilities for how far each character will advance the epidemic track on their initial roll of dice (with full dice i.e. no dice locked up from bottling up diseases).
The other advantage of being the generalist is that when you have no epidemics on your initial roll (28% of the time w/ 7 dice, higher w/ fewer) you can freely reroll dice to try and get a better result w/ no fear of the consequences of rolling an epidemic.
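The generalist's epidemic advantage can be sketched with a binomial model: each die shows its epidemic face with probability 1/6 on the initial roll, and the Generalist discards the first epidemic rolled. A minimal sketch (initial roll only, ignoring rerolls):

```python
from math import comb

def epidemic_advance_pmf(n_dice, ignore_first=False, p=1 / 6):
    """P(epidemic track advances by k) on an initial roll of n_dice."""
    pmf = {k: comb(n_dice, k) * p**k * (1 - p)**(n_dice - k)
           for k in range(n_dice + 1)}
    if not ignore_first:
        return pmf
    shifted = {}  # the first epidemic rolled is ignored, so shift k down by one
    for k, prob in pmf.items():
        kk = max(k - 1, 0)
        shifted[kk] = shifted.get(kk, 0.0) + prob
    return shifted

normal = epidemic_advance_pmf(5)            # standard character, 5 dice
generalist = epidemic_advance_pmf(7, True)  # 7 dice, first epidemic ignored
```

Under these assumptions the generalist advances the track zero steps about 67% of the time versus about 40% for a 5-die character, consistent with the claim above; the quoted 28% chance of rolling no epidemics at all with 7 dice is (5/6)^7 ≈ 0.279.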
Wednesday, January 7, 2015
The Core Loop vs The Revenue Funnel
Here are some thoughts I had the other day about F2P game design as "loops" versus the common analytical tool of a "funnel" and how the design goals of these games collide against the business decision of being F2P. For more ideas of what a "core loop" is, a Google Image search will give you lots of examples.
Mike Sacco coined a nice term for this combination
Sunday, January 4, 2015
I kinda work in the games industry
Wednesday, December 31, 2014
The most important games to me from 2014
Episode 5 of Five out of Ten magazine features an essay by Brendan Keogh about games exactly like Threes, about how these deeply systemic games are hard to talk about. Graphics, story, sound, and some elements of gameplay are easy to talk about, but systems are harder. Threes is kinda like Tetris in that new tiles are always entering the board and you have to figure out how to combine them to make more room. In Threes there are things you have control over (how you're going to shift and combine tiles) and things you don't have control over (what tile is coming up next and where it enters). You're given enough knowledge about what tile is coming next and where it can enter that you can make intelligent decisions about what to do.
It's the things that you don't have control over that make the things you do have control over fun and interesting. Threes is a focused example of how random effects, information, and choice mix together to make an amazing gameplay experience.
I reached my pinnacle in this game in October by getting a 3072 tile on my board. I've done it a couple more times since then but I highly doubt I'll ever do better. But I keep playing.
FTL: Faster than Light
FTL came out in 2012, but it received a significant update this year, which is when I really fell for it. FTL is a roguelike, which is an utterly useless descriptor unless you're familiar with the game Rogue, in which case it still doesn't tell you anything about the game. Roguelikes are games that have a relatively short duration but make up for that by using random generation to create replayability. Roguelikes must have a definite end goal and be hard. Failure must be a reasonably possible outcome.
FTL has you controlling a spaceship and its crew, racing to alert the Federation of the oncoming rebel threat, like a reverse Star Wars. As you play you'll defeat enemy ships and collect scrap and other material, which you use to upgrade your ship. FTL asks you to overcome increasingly difficult enemies by figuring out where best to spend your scrap to complement your ship's current build and the enemies you're facing. Every run through FTL feels different even if you're using the same ship (of which there are 29).
I love Hearthstone. In particular I love Hearthstone Arena. I love it for all the same reasons I love the above two games. Hearthstone Arena asks you to construct a deck out of randomly selected cards that are presented to you three at a time, to use against other people who have similarly constructed a deck. Hearthstone Arena is great because you don't have to buy the cards to use in it. You don't even have to pay to enter unless you don't have enough gold, which brings me to my second point.
Hearthstone's daily quest system is perfect. You get a quest every day, you can save up to three daily quests, and every day you can reroll one of your quests. If you want to play casually, you can reroll quests to try to get quests that can be completed at the same time. If you want to be hardcore and get the most gold possible from quests, you just reroll your 40-gold quests to try to get 60-gold quests. The quests challenge you to try new classes but offer enough flexibility to avoid them if you want.
If the above games are about choice granting you power and control to face randomly generated adversity, then PT is the opposite of that. It's not random. You have no power. PT is the scariest fucking shit I've ever played and it's free if you have a PS4. You're stuck in a hallway with no way to fight what haunts you. You can escape, but you have to figure out how, and it's not easy. PT is like nothing I've ever played before.
Goat of the Year 2014
Escape Goat 2
I know that Goat Simulator got more attention for its title, wacky gameplay, and satirical bent, but I enjoyed this game far more. Platformers are one of my favorite categories of games. Puzzles too. Puzzle platformers tend to fall flat, but Escape Goat 2 manages it perfectly. It stays fresh and fun throughout without becoming impossibly obtuse, which is what generally happens with puzzle games. It has charming graphics and sound. It has a goat and a mouse. There's one puzzle that comes to mind that was really just too hard, but fortunately I was able to look up a solution. I don't really have too much to say about this game. It's just a really solid game that deserves more attention than it got.
Desert Golfing
Desert FUCKING Golfing. Desert Golfing is an incredibly simple game. It doesn't integrate with Facebook or Twitter. There are no in-app purchases. Contrary to mobile game best practices, it costs $1.99 to download. There's no daily bonus that begs me to log back in. It feels like a rebellion against F2P and social gaming. It's the complete opposite of current trends.
You just golf. Place, drag, release. Place, drag, release. Place, drag, release. Next hole.
The difficulty comes and goes. When it's hard you're relieved to get past it. When it's easy, you celebrate the skill you've gained.
Nobody at work understands why I love Desert Golfing.
I'm stuck in Desert Golfing, stage 2303.
UPDATE: I loaded up Desert Golfing right after writing that sentence and beat that stage first try. IT TOOK ME SO MANY TRIES
Other games I really enjoyed but don't really have words for right now:
Mario Kart 8
Shadow of Mordor
Monument Valley
Shovel Knight
The Uncle Who Works for Nintendo
Monday, October 20, 2014
Final Fantasy VI
As Alexa Corriea pointed out on Twitter:
I was 8 years old at the time, but I can't for the life of me remember when I actually got the game. I don't remember a lot of things, it turns out. I do, however, remember playing the game quite a bit. To say that Final Fantasy VI is a big part of my development as a gamer would be an understatement. It, EarthBound, Chrono Trigger, Secret of Mana, and Illusion of Gaia combined to form a quintet of RPGs that were, and remain, very important to me.
FF6 struck me with its story and characters, filled with twists and turns, a large cast of interesting personalities, and brilliant villains. I loved the characters so much that I used to pretend that I was a member of their team, hanging out with them on board the Blackjack or the Falcon. Part of that was because I was a fairly solitary kid. I didn't have very many friends, nor did I hang out with them very much outside of school. It's not that I was a reject, I just didn't try to make friends or try to hang out with them. I was very happy in my world and in the worlds of the games that I played. I suppose I was also pretty publicly a nerd, and I didn't really know how to talk to people, and I had trouble making eye contact, but I was doing alright by it.
I think part of the reason the characters are so strong is because you meet them in the World of Balance, regain them in the World of Ruin and find out how they react to this disaster, and then dive into their past and history in their optional sidequest. I miss sidequests, I think they're really important to developing the game world's story.
My brother and I both played FF6 a lot, even together. We watched each other play and offered tips. We didn't play as many video games together once we got older: we drifted apart, he got his own room, and we stopped playing together as much. Eventually he'd start misbehaving, doing drugs, causing trouble, and making family life difficult. Things have gotten better, but we're still distant, and I still reminisce about those old days when we'd play together.
I played Final Fantasy VI over and over all the way up and through junior high, periodically dipping back into the game for nostalgia trips when I felt I needed them. I moved on to other things in high school, when we got a PlayStation 2 and FF9, FFX, and Kingdom Hearts were the RPGs that I played. When my brother went off to college and wanted to take the SNES with him, I obliged. When he dropped out after a semester and moved back home, a lot of the SNES games didn't come back, particularly the RPGs that I loved, that we had bonded over. When I questioned him about where they had gone, he said that he had loaned them to people and hadn't gotten them back. I pressed him about getting them back, but he always pushed it off. I now think that he probably sold them. I can only imagine what he did with the money. I've never really talked with him about this. We don't ever talk about that time in our family's life.
I almost always played with Sabin and Edgar in my party. I don't know if it's because they're strong or because I just wanted to see brothers that were distant yet loved each other.
I started playing piano in 5th grade, and my former kindergarten teacher was my first instructor. I took lessons all the way through high school. For a brief period I took lessons from a jazz piano instructor. Once, while there for a lesson, I saw a book of piano music that belonged to one of her students. It was a collection of sheet music for FF6's soundtrack. I begged her to ask her student where he got the music, and when she found out and told me, I ordered a copy immediately. Once I started college I didn't do a good job of staying in practice. Pretty much the only music I would keep playing was music from my FF6 collection and a collection of songs from across all the Final Fantasy games.
The music of FF6 is very deeply ingrained in me. Sometimes I feel that the way I can best express emotion is by playing its music on the piano. I've purchased its soundtrack in various forms and arrangements time and time again. It takes me back in time, helps me remember, reminds me of friends I haven't spoken to in a long time. It takes me back to when I was playing the game growing up. FF6 is so important to me. It's hard to say that my life would be different had it never existed, but as it stands I find it hard to imagine it'd be the same.
Sunday, November 17, 2013
On Cycles
Flower - a PS3 game that's been upgraded for PS4
I got my PlayStation 4 the other day, and it was just a little over seven years ago that the PlayStation 3 came out. At the time I was a junior in college and had snagged a reservation by camping out in front of my local GameStop with about 5 or 6 other people, including two of my college suitemates. It was a fun experience but definitely not something I plan on doing again, especially since I did the same for the Wii two days later. I had a lot of trouble staying awake the day after camping out for the Wii.
Thinking back on who/where I was at that time and all that's happened since then has been really interesting. It's easy for me to think of the PS3 launch as having been "not long ago", but when I think of everything that's happened it starts to seem more like "really long ago". I've graduated twice, had 5/6 different jobs, lived in 7 different places, gotten married, moved across the country, been to Blizzcon 3 times, and so much more. Seven years ago I had never played WoW, my parents were still thinking they'd retire to the country, and I'd never had a cat. And despite being "liberal", I was completely ignorant about social justice issues (which I hear is pretty typical).
This is all probably not surprising since it was over 25% of my life ago, but it's really easy to forget how much can happen in a period of time that seems so short.
It's kind of weird to only be thinking about this because a new video game console came out, but I think it totally makes sense. As a gamer, these consoles and the experiences I have on them are not only significant to me but also form the background of my life experiences. When I think about a game or a console, I don't just remember the things that happened in the game but also who and where I was and what was happening in my life at the time I was playing. For example, if I think about Kingdom Hearts or Final Fantasy X, I think about talking with my friend in the high school parking lot. If I think about Metal Gear Solid 4 or Mega Man 9, I remember living in my parents' house after college and the first couple months of grad school. Journey wasn't just a fantastic gaming experience; I also remember having the front door open to the house we were renting at the time, waiting for Sarah to get home from work. With an MMO it's possible to have distinct attachments to expansions because of the real-life experiences that were happening during each of them.
I think this is why we can get nostalgic for old games even if they aren't good, even if newer games in that series are "better". By starting up that game and playing it you can transport yourself back in time to when you were first playing it. Sonic games will always be tied to when my brother and I shared a room when we were very young, before he moved out into his own room and we began to drift apart. SSX reminds me of the Christmas when we had an ice storm in Arkansas and we had to stay with a family friend until the power came back.
It's said that smell is the sense that has the strongest tie to memory. Have you ever smelled a food and just been transported back to some great childhood memory of eating something tasty? Perhaps this is because smell is often used to identify things that might be poisonous or otherwise bad for us if we were to try to eat them. But wouldn't it make sense that action has a strong tie to memory too? Playing an old game can not only be fun, but it has the ability to take you back in time.
This console launch has me remembering who I was in college and thinking about everything that's happened in the interim. Much has happened, and I've really grown a lot as a person in the meantime. I've met a lot of people and done a lot of great things. A console cycle can sometimes feel short, but a lot can actually happen. I can't help but wonder what's going to happen between now and the next generation of consoles.
Wednesday, October 9, 2013
Board Games!
This is actually an old picture, it's gotten much worse.
Anyone who follows me on twitter probably has noticed that I've been talking about tabletop games quite a bit lately. In the past couple of months Sarah and I have added significantly to our collection. For an idea of what I'm talking about, look to the right. It's gotten much worse since then.
We have 56 games in total. Not all of them are in that picture, because some of them are actually behind the others. For example, you can see Munchkin there, but we actually have other versions of Munchkin; they're just stashed behind Carcassonne, Ghost Stories, and Yahtzee.
There's quite a variety there, too. You see classic games like Risk and Monopoly, but there's plenty of other games too. There's the cooperative fire-fighting simulator Flash Point: Fire Rescue. There's the popular Eurogame about connecting train routes Ticket to Ride. There's the literally-only-sixteen-cards get-your-love-letter-to-the-princess simulator Love Letter (a truly excellent game). There's also the dexterity-challenging magnet-balancing game Polarity.
This might seem like a sudden shift for me but it's really a natural extension of a trend that's been going on for roughly a decade.
Forbidden Island, a cooperative game where you play as treasure hunters trying
to get four relics from an island before it sinks.
Seeking new things
I can't really say that I know what caused it. Maybe it was because of my friend the next room over my freshman year of college. Maybe it's because that was the year Katamari Damacy was released. Maybe it's because that was the year that the PSP and the DS were released, and I hadn't really been into portable gaming since the original Pokemon some long time prior. Or possibly it's because of all these things. Ever since that year, however, I've constantly been seeking new gaming experiences. I would rather play small, mediocre, yet novel games as opposed to a full-price game that's well polished yet doesn't bring much new to the table. In the past this has meant playing mobile/handheld games and download-only games, but now it's extending to tabletop games.
Tabletop games really offer a lot of things that video games don't.
Meat Space Nine
Tabletop games are all about playing with your friends right around you. You can see and talk to each other in ways that are hampered by communicating over a headset or having to share real estate on a screen. This isn't to say that there aren't great video games that you can play with your friends all on the same couch (Smash Bros. and Towerfall are great examples of such), but this is what board games are all about. It is their jam.
Dungeons & Dragons: Castle Ravenloft
No need for dexterity
Tabletop games are almost always turn-based as well (Escape: The Curse of the Temple notwithstanding). Many people can't play competitive video games because they rely on manual dexterity, fast reaction times, and juggling a lot of information without time to think, or because first-person games make them nauseous. This gives tabletop games an extra level of accessibility that video games don't have.
Most video games go to great lengths to keep you from playing them in ways that the developers don't intend. You can't make your own rules, except on a social level ("Nobody's allowed to pick Oddjob, okay!?"). You can't add and remove components. You can't do anything, usually. Tabletop games literally cannot avoid this. Don't want to play with a particular rule? GONE. Want to add your own class to the roster of characters? DO IT. Want to add a rule or more content to the game? EASY. Think something is unbalanced? CHANGE IT. They're literally powerless to stop you. This makes them great for budding game designers to experiment with how changing rules affects the gameplay, or for hobbyists to make something that they love even better. If I think that Smash Bros. isn't balanced well, it takes a ton of effort to make it more balanced. If I think a Dungeons & Dragons class is unbalanced, that's easy to fix.
More apparent mathiness
One thing that really appeals to me in particular, in addition to their hackability, is that their turn-based nature makes it easier to see the math behind the game and optimize your gameplay. For example, in Ticket to Ride, you get 1 point for a 1-train section, 2 for 2, 4 for 3, 7 for 4, 10 for 5, and 15 for 6. Here you can easily see that you get more points per train from longer routes and should try to claim those if possible. This advantage becomes even clearer when you realize that playing trains has an opportunity cost: any turn spent playing trains is a turn in which you aren't drawing cards. So you could spend two turns playing 3 trains each and be down 6 cards with only 8 points, or you could spend two turns, one playing 6 trains and one drawing cards, and have 15 points while being down only 4 cards.
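The points-per-train math above is easy to check in a few lines. The route scores come from the text; everything else is just arithmetic:

```python
# Ticket to Ride route scores by route length, as listed above
points = {1: 1, 2: 2, 3: 4, 4: 7, 5: 10, 6: 15}

for length, score in points.items():
    print(f"{length}-train route: {score / length:.2f} points per train")

# The two-turn comparison from the text:
two_threes = points[3] + points[3]   # 8 points, down 6 cards
six_plus_draw = points[6]            # 15 points, down only 4 cards (draw 2 back)
print(two_threes, six_plus_draw)
```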
I have by no means given up on video games. I still love and play those. Most recently I've been playing a lot of Hearthstone and Spelunky.
Friday, June 7, 2013
Your ideas about WoW players are wrong - engagement bias
Not that kind of engagement
That title might seem a bit surprising, but I can assure you that it's 100% true. The primary culprit here is engagement bias, which is something you have to consider when you're analyzing a game-as-a-service like WoW. Suppose 7 million people play WoW in a given week. Let's look at them by how engaged they hypothetically are (as measured by how many days they played that week).
Engagement | Player Count | % played today | # played today | % DAU in bucket
1 day      | 1,000,000    | 14.29%         | 142,857        | 3.57%
2 days     | 1,000,000    | 28.57%         | 285,714        | 7.14%
3 days     | 1,000,000    | 42.86%         | 428,571        | 10.71%
4 days     | 1,000,000    | 57.14%         | 571,429        | 14.29%
5 days     | 1,000,000    | 71.43%         | 714,286        | 17.86%
6 days     | 1,000,000    | 85.71%         | 857,143        | 21.43%
7 days     | 1,000,000    | 100%           | 1,000,000      | 25%
Here we see that if you look at the people who play on a particular day (DAU, Daily Active Users), there is a distinct bias towards users who have a higher weekly engagement. Side note: players who play in a given week are called WAU (Weekly Active Users). Even though the WAU are evenly distributed among the engagement buckets, the DAU are heavily skewed towards the highly engaged. Then again, WAU isn't how Blizzard likely defines 'player' for WoW; they likely use subscribers as the definition, since that's how they get their money, and the $15 a low-engagement player gives them is the same as the $15 a heroic raider sends them.
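The table's numbers fall out of one line of arithmetic: a player who plays d days a week has a d/7 chance of being online today, so bucket d's share of DAU is d/(1+2+...+7) = d/28. A quick sketch that rebuilds the table:

```python
# Rebuild the engagement-bias table: 1,000,000 players per weekly bucket
players_per_bucket = 1_000_000
buckets = range(1, 8)  # days played per week

# Expected number from each bucket who play on any given day
dau = {d: players_per_bucket * d / 7 for d in buckets}
total_dau = sum(dau.values())

for d in buckets:
    share = dau[d] / total_dau  # equals d / 28
    print(f"{d} days: {dau[d]:,.0f} today ({share:.2%} of DAU)")
```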
What this means is that the people that you see every day in the game aren't really a good representation of WoW's subscriber base. People aren't as engaged with the game as they appear to be. From a development and design standpoint, the highly-engaged users are the least likely to let their subscription lapse, so features are often made to appeal to the casual crowd/make casual players more engaged. If you look at the history of WoW this is what you'll see. Even heroic raiding was oriented around this because it allowed them to make regular raiding easier and more accessible to the casual player.
This is just one example of engagement bias, which is a recurring problem in user-centric data analysis and therefore a recurring problem in the games-as-a-service industry. Engagement bias is the phenomenon that more active users are more likely to be counted/sampled.
Back when I was working on analyzing the results of my 2011 WoW Survey, one question I wanted to answer was "What are the correlations between classes?" meaning that I wanted to know which classes a player was more or less likely to play if they played another class. For example, "Are people who play Warlocks more or less likely to play a Death Knight than someone who plays a Rogue?"
Suppose that the average respondent to my survey listed two different classes among the ones that they play. At the time, this means that roughly 20% of respondents played any particular class (class representation actually varied wildly). When I pulled the percent of Warlock players that ALSO played Paladins I found a much higher number, 40% or greater. This baffled me for a long time. For each combination of classes, this same thing happened, the percentage of X players that also played Y was higher than the percent of the general population that played class Y.
Why was this?
The people I surveyed varied widely in the number of characters they played. Some people listed only 1 or 2 characters; some listed 10 or more. When I selected all the players who played a Warlock, the highly-engaged players (those with more characters) were more likely to be in that group than the low-engaged players (those with few characters). So the group of Warlock players had, on average, more characters than the general population, and when I calculated how many of them also played Paladins, I got a much higher number than for the general population.
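Here's a toy model of that mechanism (the population mix is invented, not from the survey): with 10 classes and players who pick their classes uniformly at random, mixing low- and high-engagement players makes the conditional probability of playing a second class exceed the marginal one:

```python
from math import comb

N = 10  # WoW had 10 classes at the time of the 2011 survey

def p_has_one(k):
    """P(a uniformly random set of k classes contains one specific class)."""
    return k / N

def p_has_both(k):
    """P(a uniformly random set of k classes contains two specific classes)."""
    return comb(N - 2, k - 2) / comb(N, k)

# Invented population: half the players run 2 classes, half run 6
mix = [(0.5, 2), (0.5, 6)]

p_y = sum(w * p_has_one(k) for w, k in mix)   # marginal P(plays class Y)
p_x = sum(w * p_has_one(k) for w, k in mix)   # same for class X by symmetry
p_xy = sum(w * p_has_both(k) for w, k in mix)
p_y_given_x = p_xy / p_x

print(round(p_y, 3), round(p_y_given_x, 3))  # conditional beats marginal
```

Conditioning on "plays X" over-samples the six-class players, so P(plays Y | plays X) comes out above P(plays Y), exactly the skew described above.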
Of course, there was something else that would skew the results of my analysis. I got my data not from actual game records but from survey responses that mainly came from MMO-Champion. Since these are people who participate in the WoW community, they tend to be more engaged than the general WoW-playing population.
Engagement bias is just one of the many things you have to keep in mind when you're analyzing game players. For example, during my WoW survey, I also found that MMO-Champion users tend to skew more male than respondents from other sources that I've used. For this reason and more when I was doing my analysis I was careful to make sure to state that the numbers were not to be taken as absolute facts, but as being "directional", meaning that it'll likely indicate what the differences between two groups or what may be more or less popular for a group even if the exact values aren't true for the overall population.
This is just one of the slew of problems that you run into when doing user-facing data analysis, something which I'll be covering in a later post.
Thursday, June 6, 2013
Wildstar just might get me to switch from WoW
I know I just got back into WoW, but WildStar looks really phenomenal. WildStar is a beautiful-looking MMO that's currently in development. The characters look very expressive and the environments look fantastic. It's in beta right now, and I'm really enjoying seeing how it turns out. There are two factions, the Dominion and the Exiles. It looks like there are currently six classes, of which four have been revealed, and they all look really cool.
One of WildStar's features is one that I wish WoW had: player housing. In WildStar, your house floats on a rock in the sky and is highly customizable and interactive. There are several different house models, it can get attacked, your friends can visit it, and you can return to it from anywhere at any time. It will be a really great place to log in and log out, so the first thing I see isn't a mass of people. People stress me out, and having this space to ease into the game will be really great.
Another really cool-looking feature is Paths. Just like WoW, WildStar will have races and classes, but in addition to that it will have Paths. Paths are all about the content that you like to do. If you like to fight, be a Soldier. If you like seeing all the sights, be an Explorer. If you like to learn all the lore, be a Scientist. And if you like to craft things, be a Settler. Soldiers get more combat content, Scientists get missions to examine objects, Explorers head to remote areas, and Settlers build buildings and other structures. Any race/class combination can take any of the Paths, and having them work together provides great benefits to a group.
There are tons of other things that look great about WildStar, like movement. It not only has jumping, but double-jumping. It also has rolling and dashing. It looks like it's currently targeted for release later this year.
Where Do Turtles Live
Information on where do turtles live
So where do turtles live exactly? The answer to that question depends entirely on what kind of turtle you’re asking about. In general however, most turtles prefer warm to hot weather, and you can generally find turtles near or in water.
Sea turtles are one of the most interesting species of turtle. The adults of many of the sea turtle varieties can usually be found in shallow waters along coastlines, although some of the larger sea turtles also swim out into the open sea. Usually the juvenile sea turtles can be found in bays and estuaries because they are not yet big enough to be able to handle the big currents and waves found out in the open sea.
Sea turtles are migratory creatures, and the migratory habits of sea turtles vary from species to species. Green sea turtles are one variety known to migrate along coastlines between nesting and feeding grounds. In contrast, loggerhead turtles leave their foraging areas and travel miles each way on breeding trips.
There are many other varieties of turtles also, many of which don’t live in water full time. For example, the desert tortoise lives in a hot, dry habitat and eats grasses. Another type of turtle, the Malayan Box Turtle, lives in a hot habitat where water is plentiful, but it eats pretty much anything it can find in the water. The Diamondback Terrapin actually lives in slightly salty coastal waters off the eastern and southern coasts of the United States.
Another important aspect of the turtle's home is its shell. All turtles live in a shell, whether they spend most of their time on land or in the water. However, the shell of each species offers different protections and a different type of home for each kind of turtle. Some turtles even have a hinge on the lower part of their shell. That hinge allows them to go right into their shell home and shut the doors, both front and back. Some turtles that have hinges can't completely close up. Aquatic turtles, on the other hand, do not have hinges on their shells. Their flesh is exposed at the front and back of the shell, even when they are inside their little homes.
Some turtles don’t get much protection from their shells. For example, snapping turtles aren’t very well protected by their shells. However, they tend to try not to get into trouble. Softshell turtles have more of a leathery shell instead of the hard shells you normally think about when it comes to turtles. However, softshell turtles spend most of their time in the water and enjoy the camouflage offered by their leathery shells.
Turtles have been alive for over 230 million years, and it's pretty clear that turtles have evolved to be able to handle pretty much any habitat. Turtles are truly unique creatures because of the wide variety of habitats they live in.
In Shakespeare's Twelfth Night, why is Malvolio punished while Maria is not?
Malvolio is certainly mistreated by Maria, Sir Toby, and Feste the clown, who play a prank on him and have him locked up in a dark room for madness. It was Maria who devised the scheme of giving Malvolio a letter supposedly written by Olivia, to make Malvolio believe that Olivia is in love with him and is asking him to dress and act like a lunatic. Olivia sees Malvolio acting quite foolishly, and Maria and the others have Malvolio locked up as punishment for his madness. However, while the play doesn't carry on far enough to show us, at the end Olivia agrees with Malvolio that he has been very wronged by Maria and the others and promises that they will be punished for their wrongdoings.